| Location | Date | OSHA TWA | Dose % | Leq | L10 | L50 | L90 | Peak | NIOSH TWA | Status |
|---|---|---|---|---|---|---|---|---|---|---|
Every metric was independently recomputed by a standalone Python verification script (noise_analysis.py) that processes raw data at native 0.1-second resolution using only Python standard library modules (math, sys, os, datetime), with no third-party dependencies. Seven cross-validation checks were applied to each of the three measurement files (21 total checks). All 21 checks passed. This documentation is intended for peer review and independent reproduction of results.
Instrumentation & Data Acquisition
Complete measurement chain from acoustic transducer through data export.
Equipment Specifications
| Component | Model | Specification | Role |
|---|---|---|---|
| Microphone | PCB Piezotronics 130F21 | Prepolarized free-field condenser, 20 Hz – 10 kHz | Acoustic transduction |
| DAQ System | Dewesoft IOLite 8xACC | SN DB23003583, 10 kHz sample rate per channel, simultaneous sampling | Signal digitization |
| Software | DewesoftX ver. 241202 | Reduced-data export, MAX statistic per 0.1 s window | Data acquisition & export |
| Host Computer | Dell P131F | Running DewesoftX, data stored to local DXD files | Acquisition platform |
Microphone Deployment
| Location | Mic SN | Data Format | Timestamps | Timestamp Source |
|---|---|---|---|---|
| Bullpen (Office) | 70785 | TAB-delimited TXT | Verified | Hardware-embedded in file headers |
| Build Area (Construction) | 68665 | Comma-separated CSV | Approximate | Not embedded — 08:00 start assumed |
Sampling Protocol
Continuous acoustic data were acquired at a sample rate of 10,000 samples per second (10 kHz) per channel. DewesoftX applies A-frequency weighting in the signal processing chain and exports reduced data at 0.1-second intervals, reporting the maximum instantaneous A-weighted sound pressure level within each 100 ms window. Each data file contains four columns: elapsed time (seconds), raw unweighted pressure (Pa), A-weighted SPL (dBA, computed as 20·log10(PA/20 μPa)), and A-weighted pressure (Pa).
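The relationship between the third and fourth export columns can be sketched in a few lines. This is a minimal illustration, not the script's actual code; the function name `spl_dba` is hypothetical.

```python
import math

def spl_dba(pa_pressure, p_ref=20e-6):
    """A-weighted SPL in dBA from A-weighted pressure in pascals,
    using the document's formula: 20*log10(P_A / 20 uPa)."""
    return 20.0 * math.log10(pa_pressure / p_ref)

# An A-weighted pressure of 0.1 Pa corresponds to ~73.98 dBA
print(round(spl_dba(0.1), 2))  # → 73.98
```

The reference pressure of 20 μPa is the standard threshold of human hearing, so 20 μPa maps to exactly 0 dBA.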
Measurements were conducted over three consecutive workdays: February 11, 12, and 13, 2026. Recording durations varied by day (8.25 – 8.77 hours) depending on actual shift start/end times. The Bullpen microphone recorded a total of 910,393 data points across the three-day period.
Timestamp Integrity Notice
The Bullpen microphone (SN 70785) data files contain hardware-embedded recording start times in the file headers (e.g., Start time: 2/11/2026 08:48:22.985). These have been independently verified against elapsed-time data and are used for all time-of-day alignment. The Build Area microphone (SN 68665) data files contain only elapsed time (seconds from 0) with no absolute clock reference. An assumed start time of 08:00 was applied for visualization purposes. This assumption is unverified and affects only time-domain charts — it does not affect TWA, dose, Leq, or any compliance determinations, which are computed from dBA values independent of clock time.
Mathematical Framework
Complete derivations for all noise exposure metrics, per OSHA 29 CFR 1910.95 and NIOSH Criteria for a Recommended Standard (1998).
2.1 — Equivalent Continuous Sound Level (Leq)
Leq is the energy-equivalent continuous sound level — the constant SPL that would deliver the same total acoustic energy as the time-varying signal over the same duration. It is computed by converting each dBA sample to energy (power of 10), averaging, then converting back to decibels. This is the fundamental metric from which all dose calculations derive. Note that Leq will always be greater than or equal to the arithmetic mean of the dBA values (Jensen's inequality for convex functions).
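The energy-average computation described above can be sketched as follows. This is an illustrative implementation under the document's stated definition; the function name is hypothetical.

```python
import math

def leq(dba_samples):
    """Energy-equivalent continuous level: convert each dBA sample to
    relative energy (power of 10), average, convert back to decibels."""
    energy = sum(10.0 ** (l / 10.0) for l in dba_samples) / len(dba_samples)
    return 10.0 * math.log10(energy)

samples = [70.0, 70.0, 90.0]       # one loud sample dominates the energy
print(round(leq(samples), 1))      # → 85.3 (energy average)
print(round(sum(samples) / 3, 1))  # → 76.7 (arithmetic mean, always <= Leq)
```

Note how a single 90 dBA sample pulls the Leq far above the arithmetic mean, which is exactly the Jensen's-inequality behavior used as a validation check.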
2.2 — OSHA Noise Dose (29 CFR 1910.95)
D = 100 · Σ(Ci / Ti), where Ci is the time spent at measured level Li and Ti = 8 / 2^((Li − 90) / 5) hours is the allowable exposure duration at that level.
Ti = ∞ for Li < 80 dBA (sample excluded from dose calculation)
The OSHA 5 dB exchange rate means that for every 5 dB increase in noise level, the allowable exposure time is halved. At 90 dBA, allowable time is 8 hours; at 95 dBA, 4 hours; at 100 dBA, 2 hours. The 80 dBA threshold excludes quiet periods from dose accumulation. This is a deliberate regulatory choice — OSHA considers noise below 80 dBA to not contribute meaningfully to hearing damage risk during an 8-hour shift.
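A minimal sketch of the dose accumulation described above, assuming the 0.1-second sample interval used throughout this document (the function name and default arguments are illustrative, not taken from noise_analysis.py):

```python
def osha_dose(dba_samples, interval_s=0.1, threshold=80.0,
              criterion=90.0, exchange=5.0):
    """OSHA dose %: each sample contributes its duration divided by the
    allowable time at that level. Samples below the 80 dBA threshold are
    excluded (allowable time treated as infinite, contributing zero)."""
    dose = 0.0
    for li in dba_samples:
        if li < threshold:
            continue  # below threshold: excluded from dose accumulation
        allowed_hours = 8.0 / (2.0 ** ((li - criterion) / exchange))
        dose += (interval_s / 3600.0) / allowed_hours
    return 100.0 * dose

# Sanity check: 8 hours at a constant 90 dBA is exactly 100% dose
eight_hours = [90.0] * (8 * 36000)   # 36,000 samples of 0.1 s per hour
print(round(osha_dose(eight_hours), 1))  # → 100.0
```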
2.3 — OSHA Time-Weighted Average (TWA)
The TWA converts a cumulative dose percentage back to an equivalent steady-state noise level over an 8-hour reference period. A 100% dose corresponds to a TWA of exactly 90 dBA (the PEL). A 50% dose corresponds to a TWA of 85 dBA (the Action Level). Values are not extrapolated — if measurement duration is less than 8 hours, the remaining time is assumed below threshold (contributing zero additional dose).
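The back-calculation follows the formula in 29 CFR 1910.95 Appendix A, TWA = 16.61 · log10(D/100) + 90. A one-function sketch (illustrative name, not the script's code):

```python
import math

def osha_twa(dose_percent):
    """Back-calculate the OSHA 8-hour TWA from dose percentage."""
    return 16.61 * math.log10(dose_percent / 100.0) + 90.0

print(round(osha_twa(100.0), 1))  # → 90.0 dBA (the PEL)
print(round(osha_twa(50.0), 1))   # → 85.0 dBA (the Action Level)
```

The 16.61 coefficient is 5 / log10(2): it is what makes a halving of dose correspond to exactly a 5 dB drop in TWA.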
2.4 — NIOSH Dose & TWA (Criteria Document, 1998)
NIOSH uses the 3 dB exchange rate, which corresponds to the equal-energy hypothesis — a physically correct representation where doubling the sound energy (3 dB increase) halves the allowable time. This is more conservative than OSHA's 5 dB rate and produces higher dose values for the same exposure. At 85 dBA the allowable time is 8 hours; at 88 dBA, 4 hours; at 91 dBA, 2 hours. For any given noise profile, NIOSH dose ≥ OSHA dose — this relationship serves as a built-in validation check.
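Both standards fit one parameterized dose function, differing only in criterion level and exchange rate. The sketch below demonstrates the NIOSH ≥ OSHA relationship on a synthetic profile (function and variable names are illustrative):

```python
def dose(dba_samples, interval_s, criterion, exchange, threshold=80.0):
    """Generic dose %: (criterion=90, exchange=5) gives OSHA;
    (criterion=85, exchange=3) gives NIOSH."""
    d = 0.0
    for li in dba_samples:
        if li < threshold:
            continue
        allowed_hours = 8.0 / (2.0 ** ((li - criterion) / exchange))
        d += (interval_s / 3600.0) / allowed_hours
    return 100.0 * d

samples = [85.0] * (8 * 36000)        # 8 hours at a constant 85 dBA
osha = dose(samples, 0.1, 90.0, 5.0)  # 50% of the OSHA PEL
niosh = dose(samples, 0.1, 85.0, 3.0) # 100% of the NIOSH REL
assert niosh >= osha                  # the built-in validation relationship
print(round(osha, 1), round(niosh, 1))  # → 50.0 100.0
```

The same 8-hour, 85 dBA exposure sits at half the OSHA limit but exactly at the NIOSH limit, which is the conservatism the section describes.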
2.5 — Statistical Percentiles (LN)
Statistical noise levels are computed by sorting all N samples in ascending order and selecting the value at the appropriate rank position. LN represents the level exceeded N% of the time: L90 (exceeded 90% of time) characterizes the background noise floor; L50 is the median exposure; L10 (exceeded only 10% of time) captures loud transient events. Percentile ordering must satisfy: L90 ≤ L75 ≤ L50 ≤ L25 ≤ L10. Violation of this ordering would indicate a sort or indexing error in the computation.
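The sort-and-rank procedure can be sketched as below. This uses a simple nearest-rank convention; the script's exact indexing convention may differ slightly, and the function name is hypothetical.

```python
def l_n(dba_samples, n):
    """L_N: the level exceeded N% of the time. Sort ascending and pick
    the value at the (100 - N) percentile rank (nearest-rank variant)."""
    s = sorted(dba_samples)
    idx = int(round((100 - n) / 100.0 * (len(s) - 1)))
    return s[idx]

samples = list(range(60, 101))  # uniform 60..100 dBA for illustration
l90, l50, l10 = l_n(samples, 90), l_n(samples, 50), l_n(samples, 10)
assert l90 <= l50 <= l10        # the required percentile ordering
print(l90, l50, l10)            # → 64 80 96
```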
Data Processing Pipeline
End-to-end data flow from raw acquisition through independently verified results.
Acoustic Acquisition
PCB 130F21 microphones sampled at 10 kHz via Dewesoft IOLite 8xACC. DewesoftX applied A-frequency weighting and reduced the raw 10,000 samples/second stream to 0.1-second MAX intervals, producing one data point per 100 ms window representing the peak SPL within that window.
Data Export
DewesoftX exported reduced data as text files — TAB-delimited for Bullpen, comma-separated for Build Area. Each row contains: elapsed time (s), raw unweighted pressure (Pa), A-weighted SPL (dBA), A-weighted pressure (Pa). One file per microphone per measurement day. File headers include equipment metadata and, for the Bullpen mic, the hardware-embedded recording start time.
Python Ingestion (noise_analysis.py)
The verification script reads each text file using Python's built-in open() and str.split(). Header lines are parsed for metadata (start time, source file path). Data rows are validated: the A-weighted dBA value is cross-checked against the independently computed value (20·log10(PA/20 μPa)) and flagged if deviation exceeds 0.01 dB. No third-party packages are used — no numpy, no pandas, no scipy. Only: math (logarithms, powers), os (file paths), sys (arguments), datetime (timestamp parsing).
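The per-row cross-check described above amounts to recomputing the dBA column from the A-weighted pressure column and comparing within tolerance. A sketch, with a hypothetical function name:

```python
import math

def check_row(pa_pressure, reported_dba, tol=0.01):
    """Return True when the reported dBA agrees with the value recomputed
    from A-weighted pressure (20*log10(P_A / 20 uPa)) within 0.01 dB."""
    computed = 20.0 * math.log10(pa_pressure / 20e-6)
    return abs(computed - reported_dba) <= tol

print(check_row(0.1, 73.98))  # → True  (consistent row)
print(check_row(0.1, 74.5))   # → False (flagged: deviation > 0.01 dB)
```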
Metric Computation
For each file, the script iterates all N samples and computes: Leq via energy-average summation; OSHA dose with 80 dBA threshold and 5 dB exchange rate; NIOSH dose with 80 dBA threshold and 3 dB exchange rate; TWA back-calculated from dose; Lmin, Lmax, and arithmetic mean; percentiles (L90, L75, L50, L25, L10, L95, L99); threshold exceedance event detection (continuous periods above 85, 90, 100, and 115 dBA); and hourly Leq breakdowns with wall-clock timestamps.
Cross-Validation
Seven independent validation checks are applied per file (see Section 04). These include mathematical invariants (Jensen's inequality, percentile ordering), computational reproducibility checks (Leq recompute, dose recompute, TWA back-calculation), and consistency checks (sample count, NIOSH ≥ OSHA). All 21 total checks (7 per file × 3 files) passed.
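As one example of a mathematical invariant, the Jensen's inequality check can be sketched in a few lines (illustrative function, not the script's implementation):

```python
import math

def jensen_check(dba_samples):
    """Invariant: Leq >= arithmetic mean of the dBA values, because
    10**(x/10) is convex (Jensen's inequality)."""
    mean = sum(dba_samples) / len(dba_samples)
    leq = 10.0 * math.log10(
        sum(10.0 ** (l / 10.0) for l in dba_samples) / len(dba_samples))
    return leq >= mean - 1e-9  # small tolerance for float rounding

print(jensen_check([70.0, 70.0, 90.0]))  # → True, as for any sample set
```

A failure here cannot come from the data itself; it can only indicate a bug in the Leq or mean computation, which is what makes it a useful cross-check.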
Dashboard Integration
Verified values are embedded directly into the dashboard as JavaScript constants (DATA.summary). No runtime computation from raw data occurs in the browser. Each summary entry includes a timestamp_source field ("hardware" or "inferred") that drives the timestamp reliability indicators throughout the interface.
Design Choice: Standard Library Only
The verification script deliberately avoids third-party dependencies. Every mathematical operation (math.log10, math.pow, Python's ** operator, explicit for-loop summation) is visible and auditable line-by-line. There are no opaque vectorized operations. Any reviewer can trace the computation from raw input to final result in a single Python file. The script runs identically on Python 3.6 through 3.13+ with zero pip install requirements.
Verification Process & Results
Seven independent cross-validation checks applied to each data file. All 21 checks passed.
Validation Check Descriptions
The seven checks applied to each file are:
- Jensen's inequality: Leq ≥ arithmetic mean of the dBA values
- Leq recomputation: independent recompute of the energy average matches the reported Leq
- Dose recomputation: independent recompute of OSHA and NIOSH dose matches reported values
- TWA back-calculation: TWA recovered from dose matches the reported TWA
- Sample count: parsed row count matches the file's expected sample count
- Percentile ordering: L90 ≤ L75 ≤ L50 ≤ L25 ≤ L10
- NIOSH ≥ OSHA: NIOSH dose is at least the OSHA dose for the same profile
Per-File Results
| Date | Samples | Duration | Jensen | Leq Recomp | Dose Recomp | TWA Back-calc | Count | Percentile | NIOSH ≥ OSHA |
|---|---|---|---|---|---|---|---|---|---|
| Feb 11 | 296,852 | 8.25 hrs | PASS | PASS | PASS | PASS | PASS | PASS | PASS |
| Feb 12 | 297,958 | 8.28 hrs | PASS | PASS | PASS | PASS | PASS | PASS | PASS |
| Feb 13 | 315,583 | 8.77 hrs | PASS | PASS | PASS | PASS | PASS | PASS | PASS |
21 of 21 validation checks passed (100%) across 910,393 Bullpen data points
Limitations & Assumptions
Transparent disclosure of methodological constraints, consistent with academic reporting standards.
Build Area Timestamp Assumption
Microphone SN 68665 (Build Area) did not embed recording start times in its data files. An assumed 08:00 start was applied for time-of-day alignment. This assumption is unverified and may introduce systematic error in hourly breakdowns and time-series visualizations. Aggregate metrics (TWA, dose, Leq, percentiles) are unaffected, as they are computed from the full set of dBA values independent of absolute time ordering.
Area Monitoring vs. Personal Dosimetry
Two fixed monitoring locations do not constitute personal dosimetry. Worker mobility means individual exposures may differ substantially from area measurements. Per OSHA 29 CFR 1910.95(d)(1), compliance determination formally requires personal noise exposure measurement using dosimeters worn by representative workers. These area measurements characterize the acoustic environment, not individual worker exposure.
0.1-Second MAX Reduction Bias
The DewesoftX export reports the maximum instantaneous SPL within each 0.1-second window, not the RMS or Leq within that window. For time-varying noise, MAX values systematically exceed RMS values, producing a conservative (higher) estimate of Leq and TWA. This positive bias is most pronounced for impulsive noise with short duration peaks. For steady-state noise, MAX ≈ RMS and the bias is negligible. The conservative direction is appropriate for occupational safety assessments.
Measurement Duration Normalization
Recording durations were 8.25, 8.28, and 8.77 hours — not exactly 8.0 hours. OSHA TWA is referenced to an 8-hour criterion period. The dose accumulated during the actual measurement period is used directly in the TWA formula, with the assumption that any time beyond the measurement contributes zero dose (exposure below threshold). If actual worker exposure extends beyond the recorded period, the true TWA could differ.
A-Weighting Only
All values are A-weighted (dBA). No octave-band, C-weighted, or Z-weighted measurements were performed. A-weighting approximates human hearing sensitivity and may underestimate risk from low-frequency noise exposures (particularly below 500 Hz). If significant low-frequency components are suspected, octave-band analysis per ANSI S1.11 should be conducted.
Three-Day Sample Window
Three consecutive workdays (February 11–13, 2026) may not be representative of long-term or seasonal exposure patterns. Construction noise varies with project phase, equipment utilization, weather, and crew size. A full exposure assessment per OSHA Technical Manual Section III, Chapter 5 should include measurements across representative work conditions.
Verification Scope
The Python verification validates mathematical correctness of calculations against the exported data files. It does not independently verify: microphone calibration accuracy, DAQ analog-to-digital conversion fidelity, DewesoftX's A-weighting filter implementation, or the 0.1-second MAX reduction algorithm. These are accepted per manufacturer specifications and assumed correct for the purposes of this analysis. Pre- and post-measurement calibration checks were not available for review.
Common Questions
Anticipated questions from reviewing engineers and colleagues.
“Why does the verification script use only the Python standard library?”
Every mathematical operation (math.log10(), 10.0 ** (li / 10.0), explicit for-loop summation) is visible and traceable in the source code. There are no opaque vectorized operations or implicit broadcasting rules. Any reviewer can follow the computation from raw input to final result in a single Python file. It also eliminates version-dependency risk: the script runs identically on Python 3.6 through 3.13+ with zero pip install requirements. For 296,000–315,000 data points per file, the performance difference between a numpy vectorized operation and a pure Python loop is under 2 seconds, which is irrelevant for a one-time verification run.
“Why is the Build Area start time assumed rather than verified?”
The Bullpen microphone's export files embed a Start time: field in the 11-line header (e.g., Start time: 2/11/2026 08:48:22.985). The Build Area microphone's export files contain no such field; the time column begins at 0.0 seconds elapsed. Without a hardware-recorded start time, there is no way to anchor the elapsed time to wall-clock time from the data alone. The assumed 08:00 start was a reasonable guess but cannot be verified without external evidence (e.g., operator logs, security camera timestamps, or GPS-synced time sources).
“How can the results be independently reproduced?”
The noise_analysis.py script requires only a Python 3.x interpreter with no third-party packages. Place the DewesoftX-exported text files in the same directory and run: python3 noise_analysis.py. The script auto-discovers files matching the expected naming pattern, processes each file independently, outputs all computed metrics, and runs all 7 validation checks with explicit PASS/FAIL status. The entire analysis (loading, computing, and validating 910,393 data points) completes in under 30 seconds on commodity hardware.
Over three days of continuous area monitoring in February 2026, noise levels observed at the Bullpen / office location were consistently below federal regulatory thresholds for hearing protection. Here is what that means in practice.
Exposure Assessment Summary
Observed area noise levels did not approach regulatory action thresholds
A professional-grade microphone was placed in the Bullpen area and recorded continuously for three full work days (February 11–13, 2026), capturing nearly one million individual sound level readings at a rate of 10 per second.
The recorded levels were evaluated against two regulatory standards: the OSHA Permissible Exposure Limit (PEL) of 90 dBA and the more protective NIOSH Recommended Exposure Limit (REL) of 85 dBA. On every day measured, area noise levels were below both of these thresholds by a wide margin.
Note: This was an area monitoring assessment. Results reflect noise levels at the monitoring location and should not be interpreted as individual worker exposure determinations.
What do those numbers actually mean?
71.8 dBA is comparable to the background noise level in a busy restaurant or a car traveling at highway speed with the windows up. It is noticeably louder than a quiet office (around 50–55 dBA) because the Bullpen sits adjacent to an active construction area, but it is well below levels that would trigger regulatory action.
The 85 dBA action level is roughly the volume of a food blender or a busy city street. At that level, OSHA requires employers to offer hearing protection and annual hearing tests. The office area did not come close to that threshold on any day measured.
The 90 dBA permissible exposure limit is about as loud as a lawn mower at operating distance. Sustained exposure at or above that level over an 8-hour shift requires hearing protection. The office area was nearly 20 dBA below this — and because the decibel scale is logarithmic, that represents roughly a 100-fold difference in sound energy.
How the Monitoring Worked
Professional microphone, continuous recording
A PCB Piezotronics precision microphone was positioned in the Bullpen area and connected to a Dewesoft data acquisition system. The equipment recorded the A-weighted sound pressure level ten times per second for the full duration of each work shift. Over three days, the Bullpen microphone alone captured 910,393 individual readings.
This is the same class of equipment and methodology used by professional industrial hygienists and acoustic consultants. The difference is that this was an area monitoring assessment — the microphone was placed at a fixed location in the workspace, not worn by an individual employee.
Three days of data, independently verified
Every metric in this dashboard — the daily averages, the dose percentages, the peak levels — was computed from the raw data using a transparent Python script that runs 7 independent mathematical cross-checks per day of data. All 21 checks passed. The verification process is documented in the Methodology & Verification tab of this dashboard for anyone who wants to review it.
Why the Daily Average Matters More Than Loud Moments
How noise-induced hearing loss actually works
Hearing loss from noise is not caused by individual loud sounds in most workplace settings. It is caused by cumulative energy delivered to the inner ear over time. Inside the cochlea, thousands of microscopic hair cells convert sound vibrations into electrical signals sent to the brain. These hair cells do not regenerate — once damaged, they are permanently lost.
The damage mechanism is metabolic: sustained sound energy causes oxidative stress in hair cells, leading to cell death over hours and days. A brief loud event (a door slam, a tool drop) delivers a tiny fraction of the total energy compared to hours of steady moderate noise. This is why regulators measure exposure as a time-weighted average (TWA) over the full work shift, not as a peak reading.
The 8-hour shift is the unit of measurement
OSHA’s standard evaluates noise as if all the sound energy from an entire shift were compressed into a single steady level. A TWA of 85 dBA means that over 8 hours, your ears received the equivalent energy of a constant 85 dBA tone — roughly the volume of a food blender running nonstop. A TWA of 70 dBA (typical for the Bullpen) is equivalent to the hum of a television at moderate volume sustained across the full day.
Because the decibel scale is logarithmic, this difference is larger than it appears: 85 dBA delivers roughly 30 times more sound energy than 70 dBA. The Bullpen’s observed levels are not just below the threshold — they represent a fundamentally lower energy exposure.
Common Concerns
“I can hear the construction — is that harmful?”
Hearing construction activity does not mean the noise is at a harmful level. The human ear can perceive a wide range of sound levels. Conversations happen at around 60–65 dBA; the Bullpen averaged about 70 dBA (the equivalent of a television at moderate volume). You can hear the construction, but the sound energy reaching the office area is significantly attenuated by distance and building materials.
“Some moments seem really loud — what about the peaks?”
The loudest single reading in the Bullpen over three days was 106.9 dBA (a brief impulse on February 13). Occasional short peaks are normal — a door slamming nearby can register over 100 dBA for a fraction of a second. Hearing risk is determined by sustained average exposure over the full work shift, not by individual spikes. The daily averages (TWA) account for both loud and quiet periods, and those averages were below regulatory thresholds on every day monitored.
“What about long-term exposure — is this three-day sample enough?”
Three days of continuous monitoring provides a meaningful initial snapshot of workplace noise conditions. Construction activity varies by project phase, crew size, equipment in use, and other factors, so noise levels on other days could be different. If you have concerns about a specific period or activity that seemed louder than usual, please bring that to the EHS team — additional monitoring can be arranged.
“What about the Build Area — the construction zone?”
A second microphone was placed in the active construction zone. Those levels were higher (daily averages around 77–80 dBA), which is expected in a construction environment. While still below the OSHA PEL, the Build Area levels are closer to regulatory thresholds and are tracked separately on this dashboard. Personnel working directly in the construction zone should follow any site-specific hearing protection requirements.
What Happens Next
This is an area assessment, not a personal exposure determination
The microphone measured noise at a fixed point in the workspace. Individual exposure can vary depending on where you work, how long you spend in different areas, and what tasks are being performed nearby. This assessment indicates that the general noise environment in the Bullpen area is well within safe ranges, but it does not measure what any specific person experienced over their individual shift.
A personal dosimetry study — where small noise monitors are worn by individual workers throughout the day — is the recognized method for determining individual exposure. That type of assessment may be conducted as a follow-up to confirm these area monitoring observations with person-specific data.
Questions or concerns?
If you experience ringing in your ears (tinnitus), difficulty hearing conversations, or feel that noise in your work area has increased significantly, please reach out to the EHS team. These observations are valuable even when instrument readings indicate safe levels — your subjective experience matters and can help us identify areas for follow-up monitoring.
Scope of This Assessment
- Type: Area noise monitoring (fixed microphone placement) — not personal dosimetry
- Location measured: Bullpen / office staging area adjacent to active construction
- Duration: Three consecutive work days (February 11–13, 2026), full shift each day
- Data collected: 910,393 sound level readings at 0.1-second intervals
- Standard applied: OSHA 29 CFR 1910.95 (PEL 90 dBA, Action Level 85 dBA) and NIOSH REL (85 dBA)
- Outcome: All daily time-weighted averages were below the OSHA Action Level — a Hearing Conservation Program is not triggered by these measurements
- Limitation: Area monitoring observations are not a substitute for personal exposure assessment and do not constitute a formal compliance determination
This dashboard was built using Claude Code (powered by Claude Opus 4.6) as an agentic partner to a Certified Safety Professional. Neither could have produced this alone — the CSP brought regulatory expertise, field context, and professional judgment; Claude brought computational power, visualization capabilities, and the ability to iterate at speed. Together, they turned raw microphone data into a verified, interactive compliance analysis in a fraction of the time and cost of traditional methods.
About the Analyst
Data interpreted by a Certified Safety Professional (CSP)
The analysis, interpretation, and presentation of this monitoring data was performed by a Certified Safety Professional (CSP). The CSP credential is administered by the Board of Certified Safety Professionals (BCSP) and accredited by ANAB, the same body that accredits ISO management system certifications.
In practical terms, the CSP serves a similar role in safety and health that a Professional Engineer (PE) serves in engineering: it is a competency-based credential that requires a combination of formal education, professional experience, and passing a comprehensive examination covering hazard recognition, risk assessment, regulatory compliance, and occupational health principles. Many organizations require the CSP for senior-level environmental health and safety positions.
The CSP designation recognizes that the holder has the education, experience, and demonstrated competency to lead workplace safety assessments, interpret monitoring data in regulatory context, identify limitations, and use findings to make informed, data-driven decisions about the work environment.
How This Dashboard Was Built
Domain expertise paired with AI tooling
This project was built through an iterative collaboration between a CSP and Claude Opus 4.6 (Anthropic) via Claude Code, a command-line AI development tool. Over roughly 54 hours and 46 version-controlled commits, the dashboard went from raw instrument data files to the interactive report you see now.
The CSP brought the domain knowledge: understanding of OSHA 29 CFR 1910.95, noise dosimetry mathematics, the distinction between area monitoring and personal dosimetry, how ELT audiences read compliance reports, and the professional judgment to interpret results in context. Claude brought the ability to write and iterate on code rapidly — building charts, computing energy-averaged metrics from nearly 3 million raw readings, cross-validating calculations, and refactoring the dashboard through dozens of design iterations.
Where domain knowledge made the difference
This was not a one-directional process. Several critical decisions required domain expertise that no AI system would generate independently:
Catching a regulatory math error. The initial implementation applied OSHA’s 5 dB exchange rate (16.61 × log10) to the NIOSH TWA calculation. The CSP recognized the error — NIOSH uses a 3 dB exchange rate (10 × log10) — and directed a recalculation from raw data. The corrected values shifted by up to 1.4 dBA. Small numbers, but the wrong formula applied to the wrong standard is a credibility problem in any regulatory context.
Rejecting charts that were technically correct but wrong for the audience. The project went through ridgeline density plots, polar/radial charts, and L10/L50/L90 dumbbell comparisons before settling on the current visualization set. Each was mathematically valid. Each was removed because the CSP understood that an executive will skip a chart they don’t immediately understand — not misread it, just ignore it. The time-in-zone stacked bars and heatmaps survived because they use intuitive metaphors: wider band = more time, warmer color = louder.
Language that preserves credibility. The AI’s initial labeling used “>90 dBA (Dangerous)” with red styling for the highest noise zone — even though zero time was recorded there. The CSP changed it to “90+ dBA” in amber. The reasoning: if an executive sees the word “Dangerous” anywhere on a page that concludes “compliant,” they will question the conclusion regardless of context. Softened language for zones with zero exposure is not less accurate — it is more honest about what the data actually shows.
Data artifact identification. Build Area Feb 13 contained a single minute of data at hour 16 (66.6 dBA) while all other hourly values were based on 45–60 minutes of readings. Left in, it created a misleading visual dip. The CSP and AI traced this through the embedded data, cross-referenced against the raw instrument file (which confirmed only 1 observation at 16:00:00), and nulled the point. The line now ends at the last complete hour rather than showing a single-minute artifact as if it were representative.
Scope and limitations honesty. The CSP insisted on language distinguishing area monitoring observations from personal exposure determinations — a regulatory distinction that matters legally and that an AI would not enforce unprompted. Every interpretive statement on this dashboard uses observational framing (“noise levels observed at the monitoring location”) rather than deterministic framing (“worker exposure was below limits”).
Custom skills and institutional knowledge
Over the course of this project, the design lessons learned were encoded into a reusable skill file — a structured set of 24 rules for building EHS dashboards for executive audiences, organized by severity (Critical, High, Medium) across six categories: content ordering, language and tone, chart selection, visual design, mobile accessibility, and technical quality.
Each rule traces directly to a specific decision made during this build. For example: “No alarming language for compliant results” (Critical) came from the >90 dBA labeling incident. “No statistical charts without translation” (Critical) came from removing the ridgeline and dumbbell charts. “Dark mode must be intentional” (High) came from rejecting the first blue-slate palette in favor of true black with neutral grays.
Future EHS dashboards built with this tool will start from these 24 rules rather than re-learning them. This is how AI tooling compounds institutional knowledge: each project teaches the system something concrete, and that knowledge persists across sessions as version-controlled, transparent configuration — not as opaque model weights, but as human-readable rules that the CSP authored and owns.
What This Enables
Faster, deeper analysis from raw data to executive report
A traditional path for this type of work would typically involve engaging an industrial hygiene consulting firm: scheduling a site visit, deploying equipment, collecting data, sending it to the consultant for analysis, receiving a written report, and then separately creating any visualizations or presentations for leadership. That process commonly takes 2–6 weeks from data collection to final deliverable and can cost $3,000–$10,000+ depending on scope, location, and firm.
This dashboard — from raw instrument data files to a fully interactive, verified, executive-ready report with 7 independent mathematical cross-checks per day — was built in a matter of days, with the data analysis, visualization, verification, and presentation all happening in a single iterative workflow. The raw data and every computation are embedded directly in the deliverable, making the entire analysis transparent and reproducible.
This is not to suggest the traditional approach is unnecessary. Consulting IH firms bring calibrated equipment programs, established sampling protocols, legal defensibility through chain-of-custody documentation, and the depth of experience that comes from conducting hundreds of assessments across industries. The point is that a CSP with domain knowledge and the right AI tooling can now do more of this work in-house, more quickly, and at a level of analytical depth and transparency that would be difficult to achieve manually.
Complementing, not replacing, industrial hygiene professionals
CSPs and CIHs (Certified Industrial Hygienists) operate in closely related fields. The CIH credential focuses specifically on anticipating, recognizing, evaluating, and controlling workplace health hazards — noise exposure being one of many. The CSP credential covers a broader safety and health scope. Both credentials are administered by BCSP-affiliated boards and share a commitment to evidence-based workplace protection.
This project demonstrates that a CSP equipped with AI tooling can conduct meaningful area noise monitoring analysis, present results at an executive level, and deliver faster turnaround on initial assessments. It does not replace scenarios where a CIH or qualified IH consultant is the appropriate resource: formal compliance determinations, personal dosimetry programs, complex multi-hazard exposure assessments, expert witness work, or situations requiring the specific legal weight of an IH professional’s signature.
The right framing is force multiplication: AI tooling allows a safety professional with domain expertise to do more, faster, and with greater analytical transparency — extending their reach into work that might otherwise wait for external resources or not get done at all.
Limitations and Transparency
What this project is and is not
- This is an area noise monitoring assessment, not a personal exposure determination or formal compliance audit
- The AI assisted with code, computation, visualization, and data verification — all domain judgments and interpretive conclusions are the responsibility of the CSP
- Mathematical cross-checks were performed programmatically (7 per day, 21 total), but the underlying data quality depends on proper instrument calibration and deployment, which was managed by the analyst
- The AI does not have independent knowledge of the physical monitoring environment — it works with the data as provided and flags anomalies, but cannot verify field conditions
- Cost and timeline comparisons to traditional IH consulting are estimates based on typical industry ranges and will vary by region, firm, and scope
- This dashboard is a demonstration of what is possible when domain expertise meets AI tooling — it should be evaluated on the quality of its analysis and transparency, not on the novelty of how it was built
Project Details
- Analyst: Certified Safety Professional (CSP)
- AI assistant: Claude Opus 4.6 (Anthropic) via Claude Code CLI
- Data processing: Python script with 21 independent mathematical cross-checks
- Dashboard: Single-file HTML with Chart.js and D3.js — no build step, no framework, fully self-contained
- Source data: ~2.9 million raw readings from PCB 130F21 microphones via Dewesoft IOLite 8xACC DAQ
- Version control: All code and iterations tracked in Git with full commit history