“Often, it’s better to go with simple solutions that require teamwork and individual accountability rather than trying to automate the hell out of something.”
When I was in graduate school I studied with Prof. Dick Conway, who is considered one of the “fathers of scheduling” for his early work on production scheduling. In one class, we formed small teams and were given the challenge of designing a manufacturing system for an electronic product. I can’t remember what the product was, exactly, but it was roughly as complex as, say, an office copier. My team, and most others, envisioned a highly automated production line, run by software that would provide a sort of electronic Kanban system, sending signals down the production line to the various work stations. This (overly) complex software system was supposed to balance the production steps so that work-in-process inventory could be kept to a minimum. In retrospect, it was a kind of sci-fi solution that would never have worked well, especially given the sophistication of supply-chain systems at that time (the early 1990s), and given the processing power a typical CPU offered back then. But apart from those limitations, Prof. Conway showed us that the complexity of our solutions was both detrimental and unnecessary.
It turned out that the best solution was not to use an automated production line at all. Instead, it was better to use small teams of workers that built a unit, essentially from start to finish, testing the unit after each step in the manufacturing process. Part of our course of study was to drive around the Northeast in a couple of vans with the Professor, touring various manufacturing facilities and discussing what we saw. During one of those trips, sure enough, we visited a factory that was building its products exactly like this, with small teams of workers taking ownership of their work, and with good results. This approach had many benefits compared with the fancy approach that we (the grad students) proposed: it ensured that defects were caught promptly and fixed before they started piling up at the next step in the production process, it minimized work-in-process inventory, and it let each team take responsibility for the quality of its work and its productivity. Maybe my team’s fancy, automated approach could have worked, but it probably would have failed, and it wasn’t the best approach to solving the problem anyway. Often, it’s better to go with simple solutions that require teamwork and individual accountability rather than trying to automate the hell out of something.
I’m reminded of this story when people talk about other attempts to apply computers to problems that are best solved with less elaborate approaches. In this case I’m talking about supply-chain systems, or at least the idea that supply-chain systems will one day have all-seeing, all-knowing “visibility” into the supply chain, and an ability to respond to supply-chain events with automated, intelligent actions. Maybe these systems will come along one day, but I don’t think that day is close at hand. Over a decade ago, I was the head of product management at RELY Software, a logistics software vendor that was one of the early companies to build a web-based supply-chain visibility solution. I think the product we built at RELY was ahead of its time—it was a genuine SaaS product built in the early 2000’s, with all customers supported on a single hosted instance of the software (a so-called multi-tenant system). There were other software startups around with similar ideas and products (Celarix was one that had a lot of funding and that people seem to remember). Some companies involved in supply-chain visibility came at the problem from the ocean transportation side of logistics (e.g., GT Nexus). And some came at the problem from the perspective of the railroad business, such as Transentric. The Enterprise Application Integration (EAI) vendors like WebMethods framed supply-chain visibility as a B2B integration problem, and proposed their own solutions based on their strengths (and now that I think about it, to an extent they were correct; more on that next time). And most of the Transportation Management System (TMS) vendors, such as GLog (now Oracle TM) and i2 (now JDA), developed modules that could gather logistics events from the supply chain and show you the status of a shipment.
They could also apply rules to the data they were gathering and alert you if certain conditions occurred, or didn’t occur, just like the system we built at RELY Software could.
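The core of that rule-and-alert idea is simple to illustrate. Here is a minimal sketch, assuming a hypothetical shipment record keyed by milestone events; the names (`Shipment`, `check_milestone`) are my own inventions for illustration, not the actual design of RELY’s product or any vendor’s system:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, Optional

@dataclass
class Shipment:
    """A shipment tracked as a set of milestone events (hypothetical model)."""
    shipment_id: str
    events: Dict[str, datetime] = field(default_factory=dict)  # event name -> timestamp

    def record(self, name: str, when: datetime) -> None:
        self.events[name] = when

def check_milestone(shipment: Shipment, event: str, deadline: datetime) -> Optional[str]:
    """Return an alert message if a milestone is missing or late; None if on time.

    This covers both rule types mentioned above: a condition that occurred
    (the event arrived, but too late) and one that didn't occur at all.
    """
    when = shipment.events.get(event)
    if when is None:
        return f"{shipment.shipment_id}: no '{event}' event received by {deadline:%Y-%m-%d %H:%M}"
    if when > deadline:
        return f"{shipment.shipment_id}: '{event}' occurred late ({when:%Y-%m-%d %H:%M})"
    return None  # milestone met on time; no alert

# Example: a shipment that departed an hour after its cutoff, and never arrived
s = Shipment("SHP-001")
s.record("departed", datetime(2003, 5, 1, 9, 0))
print(check_milestone(s, "departed", datetime(2003, 5, 1, 8, 0)))
print(check_milestone(s, "arrived", datetime(2003, 5, 3, 17, 0)))
```

In practice these systems evaluated many such rules against streams of status messages from carriers, but the basic shape, compare an event (or its absence) against an expectation and raise an alert, is what the paragraph above describes.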
If you add up the venture money invested in these (and similar) companies, it comes to hundreds of millions of dollars. Seriously. It seems strange to say this now, but E-Logistics (as people called it at the time) was once a hot investment area. I don’t want to say that it all came to nothing, but there sure was a lot of money wasted. At the time there was an idea that supply-chain visibility systems could somehow foretell the future, or could be used to intelligently and automatically re-route a shipment when some supply-chain event occurred. I still hear people say things like this sometimes, but this possibility remains far in the future. Why? I’ll tell you next time.