
Supply Chain Visibility – Impossible Dreams (Part I)

“Often, it’s better to go with simple solutions that require teamwork and individual accountability rather than trying to automate the hell out of something.”

When I was in graduate school I studied with Prof. Dick Conway, who is considered one of the “fathers of scheduling” for his early work on production scheduling.  In one class, we formed small teams and were given the challenge of designing a manufacturing system for an electronic product.  I can’t remember exactly what the product was, but it was roughly as complex as, say, an office copier.  My team, and most others, envisioned a highly automated production line, run by software that would provide a sort of electronic Kanban system, sending signals down the production line to the various work stations.  This (overly) complex software system was supposed to balance the production steps so that work-in-process inventory could be kept to a minimum.  In retrospect, it was a kind of sci-fi solution that would never have worked well, especially given the sophistication of supply-chain systems in the early 1990s, and the amount of processing power a typical CPU offered back then.  But apart from those limitations, Prof. Conway showed us that the complexity of our solutions was both detrimental and unnecessary.

It turned out that the best solution was not to use an automated production line at all.  Instead, it was better to use small teams of workers that built a unit, essentially from start to finish, testing the unit after each step in the manufacturing process.  Part of our course of study was to drive around the Northeast in a couple of vans with the Professor, touring various manufacturing facilities and discussing what we saw.  During one of those trips, sure enough, we visited a factory that was building its products exactly like this, with small teams of workers taking ownership of their work, and with good results.  This approach had many benefits compared with the fancy approach that we (the grad students) proposed: defects were caught promptly and fixed before they started piling up at the next step in the production process, work-in-process inventory was minimized, and the team took ownership of the quality of their work and their productivity.  Maybe my team’s fancy, automated approach could have worked, but it probably would have failed, and it wasn’t the best approach to solving the problem anyway.  Often, it’s better to go with simple solutions that require teamwork and individual accountability rather than trying to automate the hell out of something.

I’m reminded of this story when people talk about other attempts to apply computers to problems that are best solved with less elaborate approaches.  In this case I’m talking about supply-chain systems.  Or at least the idea that supply-chain systems will one day have all-seeing, all-knowing “visibility” into the supply chain, and the ability to respond to supply-chain events with automated, intelligent actions.  Maybe these systems will come along one day, but I don’t think that day is close at hand.  Over a decade ago, I was the head of product management at RELY Software, a logistics software vendor that was one of the early companies to build a web-based supply-chain visibility solution.  I think the product we built at RELY was ahead of its time—it was a genuine SaaS product built in the early 2000s, with all customers supported on a single hosted instance of the software (a so-called multi-tenant system).  There were other software startups around with similar ideas and products (Celarix was one that had a lot of funding and that people seem to remember).  Some companies involved in supply-chain visibility came at the problem from the ocean transportation side of logistics (e.g., GT Nexus).  And some came at it from the perspective of the railroad business, such as Transentric.  The Enterprise Application Integration (EAI) vendors like WebMethods framed supply-chain visibility as a B2B integration problem, and proposed their own solutions based on their strengths (and now that I think about it, to an extent they were correct–more next time).  And most of the Transportation Management System (TMS) vendors, such as GLog (now Oracle TM) and i2 (now JDA), developed modules that could gather logistics events from the supply chain, and show you the status of a shipment.  They could also apply rules to the data they were gathering and alert you if certain conditions occurred, or didn’t occur, just like the system we built at RELY Software could.
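To make the “rules and alerts” idea concrete, here’s a minimal sketch of the kind of rule these systems applied—flag shipments that departed but haven’t hit an expected milestone by a deadline.  The shipment data, milestone names, and function are all made up for illustration; none of this reflects the actual RELY product.

```python
from datetime import datetime, timedelta

# Hypothetical milestone events gathered from carriers (illustrative data).
events = [
    {"shipment": "S1", "milestone": "DEPARTED_PORT", "time": datetime(2011, 5, 2, 8, 0)},
    {"shipment": "S2", "milestone": "DEPARTED_PORT", "time": datetime(2011, 5, 2, 9, 0)},
    {"shipment": "S2", "milestone": "ARRIVED_DC", "time": datetime(2011, 5, 4, 16, 0)},
]

def overdue_alerts(events, expected="ARRIVED_DC", deadline=timedelta(days=3), now=None):
    """Alert on shipments that departed but haven't hit the expected milestone in time."""
    now = now or datetime.utcnow()
    arrived = {e["shipment"] for e in events if e["milestone"] == expected}
    return [
        e["shipment"]
        for e in events
        if e["milestone"] == "DEPARTED_PORT"
        and e["shipment"] not in arrived
        and now - e["time"] > deadline
    ]

print(overdue_alerts(events, now=datetime(2011, 5, 6, 0, 0)))  # prints ['S1']
```

The interesting (and hard) part in practice wasn’t the rule evaluation, which is simple as shown, but getting clean, timely event data out of dozens of carriers and partners in the first place.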

If you add up the venture money invested in these (and similar) companies it comes to hundreds of millions of dollars.  Seriously.  It seems strange to say this now, but E-Logistics (as people called it at the time) was once a hot investment area.  I don’t want to say that it all came to nothing, but there sure was a lot of money wasted.  At the time there was an idea that supply-chain visibility systems could somehow foretell the future or be used to intelligently and automatically re-route a shipment, given the occurrence of some supply chain event.  I still hear people say things like this, sometimes, but this possibility remains far in the future.   Why?   I’ll tell you next time.


Snarky Comments on Cloud SLAs (Part II)

In my last post I wrote about the problems with Cloud Service Level Agreements (SLAs), and why they tend to be less useful than one would like.  OK, so what can you do about it?

1) Ask Questions — If your service provider offers an SLA, study it and ask questions.  “Is that all you can promise?  What’s your historical performance?  Can you show me some documentation of that from your monitoring tools?  Have you had any downtime incidents in the last five years?  What were the root causes, and what did you do to address those?”  The discussion will be useful in filling in your picture of their capabilities.  In the end, an SLA is just a number (or a few numbers), and it doesn’t give you much insight into the provider’s ability to deliver.  An SLA is a bit like a car warranty—it’s nice to know you have it, but your real hope is that it won’t be needed and the thing won’t have any significant breakdowns.

2) Try to Negotiate — Good luck on this, but give it a try.  I’ve found that, normally, the salesman doesn’t have the discretion to change the penalties in the SLA.  However, if the vendor doesn’t offer you an SLA, just asking might get you somewhere.  In the logistics software space, there is a company called SMC3 that creates a system used for calculating the rate of a Less-than-Truckload (LTL) shipment, and most companies in the industry use this product.  They recently converted from a traditional licensed software model to a cloud model.  I was at their conference in Atlanta a few days ago, and asked about their SLA policy.  They don’t promise an out-of-the-box SLA, but they say that they’ll negotiate one on occasion.  So it seems that if you are a big enough customer, you’ve got a chance of getting an SLA in writing.  The point isn’t to negotiate the SLA up to a level the vendor isn’t comfortable with, because that isn’t going to change their real-world ability to deliver on it.  But you want to understand what they can deliver, and get comfortable (or not) with this level of performance, assuming the vendor’s service is critical to your business.  In the case of SMC3, if you are moving LTL shipments for a living and you need to quote prices to your customers, then you’re really depending on SMC3 to deliver, because if their service goes down you can’t provide price quotes to customers.  So you owe it to yourself to understand their capabilities.

3) Look at Audit Reports – If the vendor has been audited (e.g., using the SAS 70 standard) then you should get a copy of the audit report.  Read it.  DO NOT just treat this as a check mark—“great, they passed an audit.”  Look where that got investors in Enron and Worldcom.  Like most standards of this nature, the audit “control objectives” for SAS 70 are essentially whatever the cloud vendor says they are.  So the cloud vendor tells the audit firm their control objectives, and the auditor checks that there is a reasonable assurance those objectives would be achieved (the auditor will state very clearly, though, that this is not a guarantee).  The auditor will make sure the control objectives the cloud vendor uses are not completely stupid, but I find that a lot of people assume (wrongly) that SAS 70 is some kind of seal of approval that the vendor is doing “all the right things”.  SAS 70, for example, says nothing about providing disaster recovery (DR) capabilities.  A vendor can provide zero DR, and pass an SAS 70 audit with flying colors, simply by not stating any control objectives regarding DR, but showing compliance with whatever control objectives they do have.

4) Do Your Own Audit — While it might make you feel good to host your systems in-house, because you feel you have more control, chances are that you can’t do as good a job as a strong cloud vendor.  You probably know someone who feels comfortable driving a car, since they control it, but who doesn’t like flying in an airplane because they have to trust the pilot, even though flying is statistically safer than driving a car.  So maybe you should find out if your cloud vendor knows how to “run an airline”.  If the service provider is critical to you, arrange an audit where you visit them and understand their capabilities.  Are they doing the basics?  Do they have multiple power supplies on each server, or only on some of them?  Do they have multiple NICs on every server?  Ask them to show you.  Do they have current maintenance contracts with their key vendors (server hardware, network hardware, etc.) or did they “forget” to renew them?  How fast would that vendor promise to rush in a part required to fix their box if it went down?  Next business day? Hmmmm.  One of my customers once did an audit like this on my data center, and while it was painful at the time, ultimately it was beneficial for me and my team, and for the customer, since they had a much deeper knowledge of our real capabilities, and in the end they grew comfortable that we knew what we were doing.


Snarky Comments on Cloud SLAs

When I read articles about contracting with cloud service providers (or other IT service providers such as colocation providers) often I see a lot of emphasis on Service Level Agreements (SLAs).  That’s the wisdom that they offer you: make sure you have a solid SLA with your cloud service provider.  The only problem with SLAs is that while, theoretically, they should be a useful tool for managing your cloud vendors, in practice they are a bit weak.   Here’s why:

1) The standard penalties you are likely to get in your contract with a cloud vendor if they miss their SLA are minuscule compared with the pain and suffering you are going to experience if the service goes down.  One cloud provider with whom I’m familiar (I won’t name them, but their SLA is pretty typical) will provide a 5% credit if their uptime is between 99% and 99.5% (their SLA guarantee is 99.5%).  So let’s say you are spending $5,000 per month with that service provider, and that during one month they are down for 7 hours, and you can’t service your customers for most of a business day.  What do they give you?  Let’s see: a month with 31 days has 744 hours in it, so if they go down for 7 hours, they’re just above 99% uptime.  So they owe you 5% of $5,000, which is $250.  ARE YOU KIDDING ME???  Your business is out one day’s revenues, the CEO is on the phone asking to speak with “shithead”, and your cloud service provider is going to give you $250?
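You can check the credit math from that example in a few lines of Python.  The 5% tier and the $5,000 fee are just the illustrative figures from this post, modeled on a typical provider’s credit schedule:

```python
# SLA credit math for the example above (numbers are illustrative).
hours_in_month = 31 * 24          # 744 hours in a 31-day month
downtime_hours = 7
monthly_fee = 5000.0

uptime = (hours_in_month - downtime_hours) / hours_in_month
credit = 0.05 * monthly_fee if 0.99 <= uptime < 0.995 else 0.0

print(f"uptime {uptime:.2%}, credit ${credit:.0f}")  # prints uptime 99.06%, credit $250
```

Your actual losses from a full business day of downtime are in a different universe from that $250.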

2) Typically, cloud vendors lowball the SLA.  A typical number you’ll see in the marketplace now is uptime SLAs in the range of 99.5%.  So over the course of a year, they could be down for…365*0.005 = 1.825 days?  There’s a good chance I could hit 99.5% hosting your software on my laptop.   (Well, as long as I promise not to fire up iTunes, which seems to crash my Windows 7 laptop.  Yeah, yeah, I know.  It rocks on a Mac.)  By the way, that 99.5% target doesn’t include scheduled downtime.  So as long as they announce the downtime ahead of time, it doesn’t count.  By the way, ask your vendors how much notice they provide.  Some vendors only give about a day’s notice for maintenance windows, which makes it difficult if you have users that use the system during off-hours, when the vendor is doing the maintenance.  So anyway, the cloud vendors just aren’t setting very aggressive SLAs.  If you ask the vendor, they’ll probably admit that they are lowballing the SLA, or at least they’ll say, “Well, our long run average is certainly better than that.”
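For a feel of what a 99.5% annual uptime target permits, the arithmetic above works out like this (and remember, scheduled maintenance usually doesn’t count against it):

```python
# Downtime allowed by a 99.5% annual uptime SLA, excluding maintenance windows.
sla = 0.995
allowed_days = round(365 * (1 - sla), 3)
allowed_hours = round(allowed_days * 24, 1)
print(allowed_days, "days, or", allowed_hours, "hours")  # prints 1.825 days, or 43.8 hours
```

Nearly two full days of unplanned outage per year, and the vendor is still technically meeting its SLA.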

3) Some cloud vendors won’t even provide an SLA.  Salesforce.com is an example.  At least they won’t give you one if you’re not a mega-customer.  And apparently even for big customers they try to avoid doing so, to the point where according to some sources (http://www.online-crm.com/sla.htm), they won’t even call you back if you call and ask for an SLA!  So they are really just saying “look, we think our service level is good enough, but we’re not going to guarantee it, so just try it out and if you aren’t happy, then just leave.”  For certain kinds of services that’s probably good enough, but for “mission critical” services, it’s not a very reasonable answer.  One thing Salesforce does that I like a lot, though, is give you transparency to their current system status, and recent history.  Check this out:  http://trust.salesforce.com/trust/status/.  So you can have some feeling that they’ll be honest with you if there is an issue.  However, they don’t provide data on their long-run historical performance and they aren’t making you any promises.

So what do you need to do?  Forget about SLAs?  Not entirely.  I’ll talk about some ideas for managing SLAs in my next post (people have been telling me I put too much information in my posts and I need to chop it up!).


Cloud Software – 4 Step Plan

I’ve had a chance to do some thinking about what it takes to make a good cloud application, or even a good cloud software company.  Here, when I say “application”, I’m talking mainly about business applications, since I know more about those than about consumer apps.  But those of us who’ve worked more on the business applications side can learn a lot from looking at consumer apps: that’s where the trends are coming from.  All of your users are using iPhones and Amazon.com, and if your business applications don’t make sense to someone with Amazon-like expectations, you’re going to have a problem soon.

So here’s a 4-step program for cloudifying your software company.  In short, you need to combat everything that sucks about traditional business applications.  Traditional enterprise systems are bloated with too much functionality, take too long to achieve value, cost too much, and take too long for the vendor to sell.  The implementation of these systems is akin to corporate root canal (I’m not the first one to say that—people seem to think the Wall Street Journal said it first, but I couldn’t find the article), and the risk of failure is significant.  This lengthens the sales cycle, and for a long time has been curtailing innovation in the software industry, since buyers wanted to play it safe and do what their colleagues were doing rather than be innovative, because the risks to their careers were too great if a major project failed.  So, given all of this, here are my four steps:

1)      Simplify the product – Ever used Basecamp?  The guys at 37 Signals took a look at MS Project and decided it was a bloated piece of crap that didn’t actually help you to…ummmm…manage projects.  So instead, they provided a bare-bones project management tool on the web.  They don’t provide fancy functionality such as letting you model the resources assigned to a project in order to “load balance” it.   (Ever tried to do that in MS Project?  Every time I’ve tried, I ended up determining that my project would get done in 2025 unless everyone worked 24 hours a day, in which case you could bring it in a couple of years earlier.)  So your typical Project Management Office guy is going to think it’s a joke, because he went to the PMI course and they gave him a lot of fat binders that don’t look like Basecamp.  All Basecamp does is provide the ability for a team to collaborate over the web on a set of tasks.  That’s enough for a lot of people, and not just small-company people.  Lots of individuals in larger companies use Basecamp too, and they just put it on their corporate credit card and expense it, completely circumventing their IT departments, who probably don’t even know it’s being used in the company.  So it turns out there’s a market for stripped-down, simplified cloud services, and often, less is more when it comes to business applications.

2)      Speed up the time-to-value and remove barriers to adoption – A great example is Mint.com, the personal finance service.  These guys just nailed one thing, and as a result they sold the company to Intuit for $170 million only two years after its founding.  What they did was remove barriers to adoption (as compared with traditional solutions), and in the process reduce the time for the customer to see some value.  When Mint.com first appeared, its main competition was Intuit Quicken.   With Quicken, electronic banking was not supported for all banks, it was a pain to set up (as I recall, you needed to wait for something to happen at Citibank, so they could approve Quicken access to your account, and this could take a few days), and Quicken couldn’t access your brokerage accounts.  Even more annoying, you had to enter some transactions manually, and try to match them up to what was going on in your bank account.  Finally, the amounts Quicken thought were in your accounts would never quite match what was really in them, because you got some dividends from IBM and couldn’t be bothered to enter them into Quicken manually.  All of these things made it difficult to use Quicken and get any value out of it.  By contrast, if you logged into Mint, 15 minutes later you were staring at your total net worth on one screen.  Depressing, yes, but it was also amazing to users, who were able to see something useful right away.  With people using sites like Mint in their personal lives, users have little tolerance for applications that take a long time to show value.

3)      Think Freemium – Most great cloud services let you get started cheaply, or even for free.  This removes the risk for the customer that the damn thing won’t work as you expect, which is a very real risk with traditional enterprise software.  Dropbox is a great example of the “freemium” model.  They let you start out by storing 2 GB in the cloud for free.  They are betting that you’ll see value from the service quickly (point 2), dump a lot of files in there, and soon you’ll be paying for the service.  Well, it’s working.  I’m trying to figure out why I’d want to store things on my hard drive, rather than Dropbox.  The files look the same in Dropbox, except I can access them anywhere, which is really useful since I prefer to use my desktop at home, my laptop on the road, and sometimes a client gives me a PC to use when I’m at their site.  So anyway, back to my point—hook the customers for free and then charge them.  Just as in point 2, we want to remove barriers to adoption, and if we give the user a free trial, we’ve just knocked down one of the traditional barriers to the adoption of enterprise software—the high up-front cost.

4)      Rethink the Sales and Implementation Model – I need to credit Marc Benioff for this one.  The guys at 37 Signals also touch on it in their book, “Rework”, which is great if you haven’t read it.  In Benioff’s book, “Behind the Cloud”, he describes how the conventional wisdom in the enterprise software world is that you need a “bag carrying” sales force to sell the stuff: it’s a big investment, the time-to-value is long, and therefore the customer is going to make the decision slowly and be risk-averse, and the purchase is going to go up to a high level in the customer’s organization for approval, which in turn further increases the lead time to make the sale, and not incidentally, the cost of sales for the software company.  And who pays for these high costs?  In the end, the software buyer does, which again reinforces the long decision cycle, with the sales process managed by “enterprise” sales people.  Benioff turned that conventional wisdom on its head, and found that “inside sales” (i.e., telesales) works for cloud apps.  It works because the up-front commitment the customer needs to make is low, so lower-level managers can make the purchase decision; there’s little risk, thanks to the ability to try it for free (point 3); and the time-to-value is quick (point 2).  So Benioff started out with telesales, and it worked, but a funny thing happened next.  Larger customers started buying Salesforce, and now with Force.com, you have a framework for customizing the platform.  So now you’ve got Salesforce Value Added Resellers (VARs) out there implementing customizations to Salesforce, which is a little like the traditional enterprise model where VARs, consulting firms, and the vendors themselves would send high-priced consultants into the field to configure and customize the system for you.  But I don’t think that disproves the point: the sales and implementation models change with cloud software, and you need to move towards telesales and tele-implementation (a support team that works through a customer’s implementation issues over the phone and the web, rather than a traveling army of IT consultants).

The great thing about cloud applications is that they are opening up all kinds of opportunities to innovate, in a sector that’s been hidebound for the past decade.  Look at the supply-chain software space: the companies that invented supply-chain software were innovators in the 1990s, best-of-breed players who created the first transportation management systems, supply-chain planning systems, and warehouse management systems.  But in the 2000s it was hard to be an innovator.  Customers were hesitant to take a chance.  “What if the system doesn’t work as advertised?  Do we really want to sign up for a long and tough implementation with an upstart vendor?”  Today, the cloud is changing that by reducing the costs and risks of going with an innovator.  That’s better for customers and better for software vendors as well (at least the innovative ones).
