Dr Luís Moreira-Matias is Senior Director of Artificial Intelligence at sennder, Europe’s leading digital freight forwarder. Luís created sennAI, which oversees the creation of proprietary AI technology for the road logistics industry. During his career, Luís led 50+ FTEs across 4+ organisations to develop award-winning ML solutions. Luís has a world-class academic track record, with publications on ML/AI fundamentals, five patents and multiple keynotes worldwide.
In this interview, Luís explains how sennAI has revolutionised operations for import-export companies and carriers across Europe. He explains how sennAI has also optimised freight journeys, reducing the industry’s carbon footprint. Luís considers the challenges his team encountered when developing the model, and the advice he can offer B2B companies embarking on their own AI projects:
Sure. So in the past, the business model of a freight forwarder had many limitations. On one side were large companies that wanted to move bulk containers of goods from A to B. On the other were typically small trucking companies (90% of trucking companies in Europe have a fleet of 15 or fewer trucks). Because these gigantic companies and these very small companies operate with different business models, they need a broker, or 'freight forwarder', to serve as a proxy. And traditionally, these brokers were contacted via analogue methods.
So there’d be dozens of phone calls, emails, faxes, etc. between these big companies, the broker, and the trucking companies. It was hard work to arrange services, to evaluate the legitimacy of the trucking businesses, and also complex from the admin side of giving feedback, receiving invoices and so on.
So what sennder did was to digitalise all this into one marketplace. This brings transparency and significantly improves the customer experience on both sides.
Large companies with cargo they need transporting approach us and advertise a service request on the platform. The small trucking companies on the other side can log in to the marketplace and bid, or simply take one of the services available.
One problem we encountered when going digital was how to manage the volume of bids we were receiving (we have 7,000-10,000 loads every day). So we introduced predictive pricing and enabled a ‘one button sell’, meaning we can create an optimal price for everyone involved: we can guarantee we have a brokerage margin, and also that the carrier has enough money to pay the cost of their trip and still do some business for their side.
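The margin guarantee described above can be sketched as a simple price floor. This is an illustrative toy, not sennder's actual pricing logic: the function names, margin parameters, and default rates are all assumptions made up for the example.

```python
# Illustrative sketch (not sennder's actual logic): a model-predicted
# market price is clamped so the final price always covers the carrier's
# trip cost plus a small carrier margin, and leaves the broker a minimum
# brokerage margin. All parameter values below are invented.

def quote_price(predicted_price: float,
                carrier_cost: float,
                min_carrier_margin: float = 0.05,
                min_broker_margin: float = 0.08) -> float:
    """Return a shipper-facing price for a 'one button sell'.

    predicted_price    - market price estimated by a pricing model
    carrier_cost       - carrier's estimated cost for the trip
    min_carrier_margin - carrier must earn at least this fraction over cost
    min_broker_margin  - broker keeps at least this fraction of the price
    """
    # The carrier payout must cover the trip cost plus their margin.
    carrier_floor = carrier_cost * (1 + min_carrier_margin)
    # The shipper price must leave the broker its margin on top of that payout.
    price_floor = carrier_floor / (1 - min_broker_margin)
    return max(predicted_price, price_floor)

print(quote_price(predicted_price=1000.0, carrier_cost=900.0))
```

With these toy numbers the predicted price of 1,000 is below the floor, so the quote is raised until both margins are protected.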
Another big problem with the old system was finding the right load: searching through 7,000-10,000 loads was a major hassle for carriers. So we worked to minimise the time they spend searching on the platform, in order to maximise our conversion rate.
We now have a recommendation system that guarantees a personalised experience for our carriers: it serves each of them three to ten recommended loads at login time. This approach offers a marketplace that probably has no parallel in the European market today.
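A login-time recommender of this kind can be sketched as scoring open loads per carrier and returning the top few. This is a hypothetical illustration: the scoring criteria (proximity to the carrier's base, familiarity with the lane) and all field names are assumptions, not sennder's actual features.

```python
# Hypothetical sketch of login-time load recommendation: score each open
# load for a carrier (here by distance from the carrier's base and by
# whether the carrier has driven that lane before, both invented
# criteria) and return the top k as recommendations.

import heapq

def recommend(carrier, open_loads, k=3):
    """Return the k open loads that best match this carrier."""
    def score(load):
        # Higher is better: prefer loads starting near the carrier's base
        # and on lanes the carrier has driven before.
        proximity = -abs(load["origin_km"] - carrier["base_km"])
        familiarity = 10.0 if load["lane"] in carrier["past_lanes"] else 0.0
        return proximity + familiarity
    return heapq.nlargest(k, open_loads, key=score)

carrier = {"base_km": 0, "past_lanes": {"Berlin-Paris"}}
open_loads = [
    {"lane": "Berlin-Paris", "origin_km": 5},
    {"lane": "Munich-Milan", "origin_km": 2},
    {"lane": "Hamburg-Lyon", "origin_km": 50},
]
top = recommend(carrier, open_loads, k=2)
```

A real system would learn the scoring function from conversion data rather than hand-coding it, but the top-k shape of the output (three to ten loads at login) is the same.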
The challenge for us was we needed to convert, and maximise our margin as well. And these two things play against each other, because the larger the margin, the less likely you are to convert.
And as a B2B organisation, we had to be careful. Consider how B2C companies scale up by increasing the number of customers they have. If these companies make an error and a customer has a bad experience, they could lose that customer – but that’s not an issue because they have millions more. At sennder, however, we deal with these massive companies. If we lose their custom, we could lose up to 10% of our revenue. Consequently, we couldn’t approach experimentation and modelling with the same freedom as, say, a retail company.
Another challenge was how to create a model that would cope with fluctuations over time. We deal with dynamic systems, so what is a good recommendation today, may not be a good recommendation tomorrow; what’s a good price right now may not be a good price in five minutes. So we needed to incorporate into our model components that can evolve over time, offer predictions, and also evolve with the patterns we’re learning over time. We needed a model that could continuously learn.
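The "continuously learning" idea can be illustrated with a minimal online model updated as each observation arrives, so recent patterns gradually override older ones. This is a from-scratch toy (plain stochastic gradient descent on a linear model), far simpler than any production system; the feature, data, and learning rate are invented.

```python
# A minimal sketch of continuous learning: an online linear model
# updated with one SGD step per newly observed trade, so the model
# adapts as the market drifts. Illustrative only.

class OnlinePriceModel:
    def __init__(self, n_features: int, lr: float = 0.05):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        return sum(wi * xi for wi, xi in zip(self.w, x)) + self.b

    def update(self, x, y):
        """One SGD step on squared error for a newly observed (x, y)."""
        err = self.predict(x) - y
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err

model = OnlinePriceModel(n_features=1)
# Toy stream of (features, observed price), here price = 2*distance + 1.
stream = [([1.0], 3.0), ([2.0], 5.0), ([3.0], 7.0)] * 500
for x, y in stream:
    model.update(x, y)
```

If the underlying pattern shifts, new observations simply pull the weights toward the new regime; that is the property the interview describes, achieved in production with far more sophisticated machinery.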
With B2B companies in particular, the high price of mistakes gives an extra degree of complexity, and we have to deal with that on a daily basis.
We continue to do a lot of empirical experimentation, but we're also looking to incorporate a lot of business knowledge. So, we work very closely with the business side to learn from them in the way we create the systems. Often in a direct way: which data sources to use, how to use them in the model, how to use an eligibility rule (i.e. matching geography, load and distance requirements with the appropriate carrier), and so on.
So there are mathematical ways to optimise: these complex, predictive models we can create from the data.
But we also need constraints put in place by business experts, who suggest more basic approaches. So we aim to strike a balance between these basic, common-sense business ideas, and also incorporate the complex models and data in the creation of our systems.
We also have strong monitoring and feedback loops: we do have errors, but we’re quick to react to those. We have mechanisms in place to counter them.
What I see on many machine learning teams is they have the engineers, and they have the manager. And the manager is seen as this mythological figure capable of doing everything: the hiring, the team’s outputs, the model results, performance reviews, one-to-ones, etc. And what ends up happening is either the manager gets burned out, or they end up being negligent in one of these areas.
Typically, one of the things that falls short is stakeholder management, because machine learning teams aren't well understood by the rest of the company. So, these teams end up feeling like they don't need to connect with the business. Of course, this setup isn't ideal.
So, one of the things that I introduced in sennder was to have a technical product owner on the team.
In practice, this means there’s no longer this mythological figure of a manager. Instead we have two humans with different responsibilities. We have this technical product owner who represents the customers in the team, who will build that rapport with the stakeholders and work together with them. This allows the other team members to focus on their work.
And if there’s an emergency in the company, this product owner is the one who will make sure the team drops everything and investigates it promptly. This person is responsible for roadmap building and backlog prioritisation, and they’re also the voice of the customer in the team.
When it comes to technical leadership (hiring, employer branding, performance reviews), this is where the engineering manager steps in. So, the manager doesn't need to worry about what the team needs to do next, or what the next priority is. They just need to be concerned with the how: how will we solve this problem?
I think this approach is a professionalisation of leadership in ML teams that’s greatly needed, especially knowing that these teams tend to be distanced from the business and less well-understood. I believe it’s a way to get them closer to success.
So each team has a scope, a certain set of tasks or problems to solve. But inevitably in an organisation there are overlaps between teams. So one thing we do is try and minimise those overlaps.
And we ensure that the product owner is responsible for that scope. So, whatever problems are prioritised by product leadership to be solved by AI teams, it will fall to that product owner to be the customer representative in the team and determine what is to be done next.
And we need to be very clear on what success looks like. For this reason, it’s important that the technical product owner has a background in computer science, ideally data science and ML. Because they must have this ability to translate business into technical language. I want this person to be able to be critical of the work of the engineers if needed, and challenge them when required.
So, this is a use case that has existed in the company since 2021. I joined in April 2022, and since then the algorithm has undergone several incarnations.
There were three challenges when perfecting the algorithm: one was how to reconcile the different flavours within our business. For example, we have contract business, and we also have spot business.
Another challenge was taking into account the different prices, the way they can fluctuate and the dynamic nature of the market. This makes our business model very unpredictable.
Thirdly, our business is different from geography to geography. Europe is a very regional business where each country has its own legislation on how to run road logistics.
So, we designed a model that's able to learn. Typically, machine learning algorithms on these types of problems end up focusing a lot on outliers; in other words, they avoid having any single price that misses the mark badly, which sounds great. But because the phenomenon is so stochastic, our pricing as a whole ends up being very off.
So what we do instead is we divide the learning regime into two components. One where we say, ‘okay, are we able to give a price to this load? Yes or no?’ And if we say yes, then we have another machine learning model that says: ‘this is the price for this load.’
At this point, we can trust the model to set automated prices on certain loads; the loads that receive no price need to be bid on in order to be sold. This allows us to be much more aggressive with the machine learning model: for instance, we can ignore outliers, change the loss function of the algorithm, and focus much more on the normal points and on the absolute error.
And this meant we could build a more accurate model for a smaller percentage of the points, let's say 80-90%, rather than trying to be exact on 100% of them. It's a subtle change, but it made a significant difference in the end, in terms of the automated profit this algorithm could drive for the company.
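The two-stage structure described above can be sketched very simply. This is an assumed, much-simplified stand-in for the production system: stage 1 is reduced to a confidence check on how many comparable historical loads exist, and stage 2 to their median price, which is the estimate that minimises absolute error and is barely moved by outliers.

```python
# A simplified sketch of the two-stage idea (assumed structure, not
# sennder's production system). Stage 1: decide whether the load is
# "priceable" at all. Stage 2: price it with an outlier-insensitive,
# absolute-error-focused estimator (here, the median of comparables).

from statistics import median

def price_load(load, history, min_comparables=5):
    """Return an automated price, or None to fall back to manual bidding."""
    comparables = [h["price"] for h in history
                   if h["lane"] == load["lane"]]
    # Stage 1: are we confident enough to price automatically?
    if len(comparables) < min_comparables:
        return None  # this load goes to the bidding flow instead
    # Stage 2: the median minimises mean absolute error, so a few
    # extreme historical prices barely move it.
    return median(comparables)

history = [{"lane": "Berlin-Paris", "price": p}
           for p in (950, 1000, 1020, 980, 1010, 5000)]  # 5000 is an outlier
print(price_load({"lane": "Berlin-Paris"}, history))  # outlier barely matters
print(price_load({"lane": "Madrid-Rome"}, history))   # too little data -> None
```

Splitting "should we price this?" from "what is the price?" is what lets the second model be aggressive about ignoring outliers: the hard-to-price loads never reach it.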
So, as a society we’ve had tough times economically, and supply chain businesses in particular have been struggling. However, sennder keeps growing their business and their revenue massively and organically, year on year. And this year was no exception.
One thing that changed for us this year is that we saw large growth in the usage of our marketplace. At the same time, we also saw a large increase in the volume of sales made through our pricing, i.e. with the price driven by our algorithm.
So, these three factors are connected, and this can only happen if our customers are happy with our platform. Otherwise, the carriers won't keep coming back to buy on the marketplace, and the shippers won't keep coming to place loads on the marketplace.
The positive impact varies according to geography, but overall, we can be talking about growth of 30% or 40% year on year; in some areas, even more. When it comes to usage of the marketplace, the numbers reach 50% in some geographies. And in terms of the profit achieved through sales driven solely by our machine learning algorithms (the recommender and the dynamic pricing algorithm), from last year to this year there's a 500% difference.
Yeah. Absolutely. So, not all the business is driven through the platform. We still have a considerable part that’s done traditionally with the human in the loop. That percentage is decreasing, but it’s a component that will always exist.
One thing I want to point out is that we’re not designing AI algorithms to replace humans. What we’re trying to do is to enable humans to be more productive.
So, right now, a human operator can process 30 to 50 loads a day. We want to enable these same human operators to process 500 or 1000. There’s no way this person will be able to do that without the support of AI and machine learning algorithms that automate some decision-making in the process. And we allow these human operators to step in when a problem arises.
By doing this, we’re actually elevating the profession of these people so they can get better salaries, because they’re responsible for driving much more revenue per headcount.
So I think we’re achieving, through technology, a significant contribution to society and improvements to the way we live. And I would like to highlight that.
So, the baseline system you initially start with won't be ML, or at least will be very rudimentary ML. That's because there are problems that must be dealt with before you risk anything more advanced. There are typically five main issues you need to solve when you design a new ML project:
Firstly you need the problem statement: how to go from the business problem to the machine learning problem, and what success will look like. This is absolutely key.
Second is methodology: the machine learning modelling. Which data you’ll use, which features you use, which label you use, which algorithm, and which loss function.
Third, offline evaluation. How do we determine that this model is good enough to go live? Do we have a baseline to compare it with, something really simple and intuitive to the business? If the machine-led method doesn’t outperform these baselines, we’ll go with the baseline because it’s easier to interpret and operate.
Fourth is the architecture: how your data is served to train your model; how automated your process is to train and deploy it, and how your model is served once it’s live.
And finally, live evaluation. Once your model is live, how do we determine that it’s working? This involves experimentation, but it also involves live monitoring and so on.
You must get these five right, otherwise you risk having a model that doesn’t scale. Or risk having a complex model with no way of evaluating whether it works. And once you have the five defined, you might discover you’ll need to tweak the methodology or the problem statement slightly, or the architecture, to get the right type of data source. So you iterate on these components.
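The offline-evaluation step in particular (point three) can be sketched as a simple gate: compare the candidate model against a simple, business-intuitive baseline on a held-out set, and prefer the candidate only if it clearly wins. The function names, the error metric, and the 5% improvement threshold are illustrative assumptions, not a prescribed recipe.

```python
# Sketch of offline evaluation against a baseline: the candidate model
# must beat a simple rule by a margin before it replaces it, because a
# baseline that ties is easier to interpret and operate.

def mean_abs_error(predict, samples):
    """Mean absolute error of a prediction function over (x, y) pairs."""
    return sum(abs(predict(x) - y) for x, y in samples) / len(samples)

def choose_model(candidate, baseline, holdout, min_improvement=0.05):
    """Pick the candidate only if it beats the baseline's MAE by >= 5%."""
    base_err = mean_abs_error(baseline, holdout)
    cand_err = mean_abs_error(candidate, holdout)
    if cand_err <= base_err * (1 - min_improvement):
        return "candidate"
    # The baseline wins ties and near-ties.
    return "baseline"

# Toy holdout set: (distance_km, observed price).
holdout = [(100, 210), (200, 395), (300, 610)]
baseline = lambda km: 2.0 * km            # simple rate-per-km rule
candidate = lambda km: 2.0 * km + 5.0     # slightly better toy model

print(choose_model(candidate, baseline, holdout))
```

The same gate naturally feeds the iteration loop described above: if the candidate fails, you revisit the methodology or the problem statement rather than shipping complexity for its own sake.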
This was a pattern we saw with our pricing model: we started with a linear model, then went for a more classic ensemble of decision trees, and now we have something more advanced.
You can see this process at work in other businesses, too. A great example is Amazon, with their sales forecasting model. They started in the early 2000s with a very simple linear model, then introduced the time series model, decision trees, more complex decision trees, until the deep learning model they have today. That transition took over 10 years. And the model will continue to evolve as their data evolves and their business evolves.
So, the key message is: don’t rush to find that optimal solution. It’s something you need to build on, to develop and iterate over time. There are lots of issues and considerations at play.
Yes, absolutely. Our vision is to achieve increased sustainability and to minimise emissions; to go beyond the smooth assignment of individual loads between shippers and carriers on our marketplace, to chartering contracts.
In practice, this means that we say to each carrier: ‘I want you to run your drivers and your trucks for one million kilometres for three months or six months, at this price, on these routes.’ So, instead of us selling each load individually to that carrier, we basically operate their assets for them.
And that means we can coordinate the scheduling for the drivers and set the scheduling for the trucks. We can specify that the trucks drive here, rest here, fuel there, pick up load here and so on, allowing the drivers to focus on what’s the priority for them, which is driving their trucks. So, it’s the ultimate customer experience for them.
But this also means we’ll be able to have a holistic optimisation of the truck fleet across Europe, and implement network optimisation at AI level. So we’ll have the AI and experts dedicated to maximising efficiency. And this is the ultimate strategy that will drive our company vision of enabling trucking companies to run their businesses more profitably and with lower emissions.