Examples of Excellence

Selected outcomes from high-impact automation recoveries

As an Emergency Automation Support Engineer, Justin has supported operations at more than 100 warehouses and manufacturing facilities across the United States. During each high-impact event, he quickly stabilized production, coordinated with the right stakeholders, led detailed root-cause analyses, and ensured that lessons learned were clearly communicated to site operations and maintenance teams.

Performance Improvement at Amazon Fulfillment Center

At one Amazon Fulfillment Center, a detailed performance review revealed significant reductions in system jams and defect rates while overall throughput increased. By Week 17, total jams had dropped to 10,960 — a 36.5% decrease compared to Week 8. DPMO improved to 1,870 in Week 17, representing a 43.1% reduction from Week 3. What makes this improvement especially notable is that these results occurred during periods of higher operational demand. For example, Week 14 saw a 26.8% increase in building volume compared to Week 8, yet jams were 15.3% lower. Comparing the peak jamming week (Week 8) to the latest data (Week 17), volume rose by 11.7% while jams declined by 36.5%. These results demonstrate the effectiveness of continuous process optimization and collaboration between engineering, maintenance, and operations teams — reinforcing the value of rapid analysis, targeted adjustments, and strong communication across all levels.
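
For readers who want to reproduce the percentage math, the short sketch below applies the standard percent-change formula to the figures quoted above. The Week 8 jam count and Week 3 DPMO baselines are back-computed from the stated reductions rather than taken from site data, so they are approximations for illustration only.

```python
def pct_change(old: float, new: float) -> float:
    """Percentage change from old to new (negative means a reduction)."""
    return (new - old) / old * 100

# Week 17 figures as stated above.
jams_wk17 = 10_960
dpmo_wk17 = 1_870

# Baselines implied by the stated reductions (back-computed, not site data):
# a 36.5% jam reduction to 10,960 implies roughly 17,260 jams in Week 8;
# a 43.1% DPMO reduction to 1,870 implies a Week 3 DPMO of roughly 3,290.
jams_wk8_implied = jams_wk17 / (1 - 0.365)
dpmo_wk3_implied = dpmo_wk17 / (1 - 0.431)

print(f"Implied Week 8 jams: {jams_wk8_implied:,.0f}")
print(f"Implied Week 3 DPMO: {dpmo_wk3_implied:,.0f}")
print(f"Jam change, Week 8 to Week 17:  {pct_change(jams_wk8_implied, jams_wk17):+.1f}%")
print(f"DPMO change, Week 3 to Week 17: {pct_change(dpmo_wk3_implied, dpmo_wk17):+.1f}%")
```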

Sorter-Wide Tracking Jams Due to Electrical Interference

A FlexSort sorter experienced repeated building-wide stoppages caused by false package signals that triggered system tracking jams. The downtime affected multiple sort lanes and delayed outbound flow across the entire operation. Justin traced the issue to an ungrounded 480V servo cable that was inducing electrical noise into the low-voltage photoeye circuits. Once the cable was grounded and the affected sensors were retrained, full operational stability was restored. The root cause was improper cable shielding and grounding during installation, and preventive measures were introduced to verify servo grounding and shielding integrity during future deployments.

Building-Wide Mis-Diverts After Server Replacement

After a server replacement, a facility experienced widespread mis-diverts where pick modules routed packages to incorrect zones. The misconfiguration disrupted warehouse flow and halted order processing. Justin collaborated with IT and software engineers to identify that the “pick-to-voice” feature had not been re-enabled following the new server installation. Once the setting was restored and barcode scanners were realigned, all modules resumed normal operation. The issue stemmed from an unverified post-upgrade configuration, and Justin implemented a server validation checklist to ensure all WMS-to-sorter settings are confirmed before go-live.
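
As a rough illustration of what such a post-upgrade validation can automate, the sketch below compares live settings against a go-live baseline and reports anything that was not restored. The setting names and the expected-value mapping are hypothetical placeholders, not the facility's actual WMS configuration keys.

```python
# Hypothetical post-upgrade validation: compare live settings against a
# go-live baseline and report anything not restored after a server swap.
# Setting names here are illustrative placeholders only.
expected_settings = {
    "pick_to_voice_enabled": True,
    "scanner_alignment_verified": True,
    "sorter_divert_table_loaded": True,
}

def validate(live_settings: dict) -> list[str]:
    """Return descriptions of settings that differ from the go-live baseline."""
    return [
        f"{key}: expected {expected}, found {live_settings.get(key)}"
        for key, expected in expected_settings.items()
        if live_settings.get(key) != expected
    ]

# Example run against a server where pick-to-voice was left disabled.
issues = validate({
    "pick_to_voice_enabled": False,
    "scanner_alignment_verified": True,
    "sorter_divert_table_loaded": True,
})
for issue in issues:
    print("FAIL:", issue)
```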

Network Instability Across Crane Systems

A site experienced intermittent communication loss across multiple cranes tied to a shared Master #1 network. The instability resulted in production delays and operational safety risks due to unresponsive automation zones. Justin conducted a network diagnostic and pinpointed a failing industrial switch that caused recurring packet loss. The faulty switch was replaced, restoring stable communication across all devices. The issue originated from hardware degradation, and a preventive maintenance plan was established for quarterly network health checks and redundant switch paths.
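
The sketch below shows one simple way to quantify the kind of recurring packet loss isolated in this case: ping each crane controller repeatedly and report the loss rate per device. The IP addresses are hypothetical, the ping flags assume a Linux host, and a real diagnostic would typically also review switch port counters and error logs.

```python
import subprocess

# Hypothetical crane controller addresses on the shared Master #1 network.
CRANE_CONTROLLERS = ["192.168.10.11", "192.168.10.12", "192.168.10.13"]
PING_COUNT = 50

def packet_loss(host: str, count: int = PING_COUNT) -> float:
    """Return the fraction of pings to `host` that received no reply."""
    lost = 0
    for _ in range(count):
        result = subprocess.run(
            ["ping", "-c", "1", "-W", "1", host],  # Linux ping: 1 packet, 1 s timeout
            capture_output=True,
        )
        if result.returncode != 0:
            lost += 1
    return lost / count

for host in CRANE_CONTROLLERS:
    loss = packet_loss(host)
    status = "SUSPECT" if loss > 0.02 else "ok"
    print(f"{host}: {loss:.1%} loss [{status}]")
```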

Multi-Zone Merge Jam Escalation

Multiple merge and ECC zones were simultaneously jamming due to mismatched acceleration and braking parameters. The misalignment slowed throughput and increased jam frequency across several subsystems. Justin analyzed the PLC zone configurations, synchronized acceleration profiles, and trained the on-site controls team on parameter tuning best practices. The event highlighted the need for better consistency between mechanical tuning and controls configuration, leading to the creation of a zone-by-zone validation checklist used at startup.
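
As a minimal illustration of the zone-by-zone validation idea, the sketch below compares each zone's acceleration and braking parameters against a reference profile and flags mismatches. The zone names and values are invented for the example; actual figures would come from the PLC configuration for the specific conveyor hardware.

```python
# Hypothetical zone parameters (values are illustrative, not real PLC data).
REFERENCE = {"accel_mm_s2": 900, "decel_mm_s2": 1200}
TOLERANCE = 0.05  # allow 5% deviation from the reference profile

zones = {
    "MERGE-01": {"accel_mm_s2": 900, "decel_mm_s2": 1200},
    "MERGE-02": {"accel_mm_s2": 700, "decel_mm_s2": 1200},  # mismatched accel
    "ECC-04":   {"accel_mm_s2": 900, "decel_mm_s2": 1500},  # mismatched decel
}

def mismatches(params: dict) -> list[str]:
    """Return parameter names that deviate from the reference beyond tolerance."""
    return [
        name
        for name, ref in REFERENCE.items()
        if abs(params[name] - ref) / ref > TOLERANCE
    ]

for zone, params in zones.items():
    bad = mismatches(params)
    print(f"{zone}: {'OK' if not bad else 'MISMATCH in ' + ', '.join(bad)}")
```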

System-Wide Photoeye Signal Instability

Throughout an entire sortation system, photoeyes were intermittently flickering and generating false readings, halting multiple zones. Justin traced the anomaly to a power supply outputting 29VDC without a proper DC ground reference. After the supply was replaced and properly grounded, sensor readings stabilized and the false stops ceased. The root cause was an ungrounded DC circuit producing unstable voltage levels, and ground verification is now required during every preventive maintenance inspection.

Site-Wide Instability Post-Launch

A newly launched fulfillment site faced persistent throughput losses and frequent automation faults during its initial weeks of operation. Justin led a full-system triage effort covering mechanical alignment, PLC logic, and software coordination. Once high-impact bottlenecks were prioritized and the control logic refined, throughput stabilized and alarm counts dropped significantly. The underlying cause was inconsistent commissioning standards across subsystems, and a post-launch validation checklist was created to confirm full functional readiness before operational handoff.
