Sep 09, 2024 |

Thomson Reuters Labs: Behind the Scenes of our Machine Learning Operations Journey with Amazon Web Services

Danilo Tommasina, an engineer in Thomson Reuters Labs – the dedicated applied research division of Thomson Reuters – shares an in-depth look at collaborating with Amazon Web Services.

We hear about artificial intelligence (AI), machine learning (ML), generative AI and large language models daily. Advancements in these fields have been nothing short of astonishing, offering unprecedented possibilities to both businesses and individuals. 

While it’s easy to be captivated by flashy AI demonstrations, the challenge lies in developing reliable, scalable solutions that deliver tangible value to customers. At Thomson Reuters, we’ve embraced this challenge head-on.  

Developing AI solutions effectively at scale requires a breadth of skills, software, and infrastructure components, each demanding significant depth of knowledge. Building all of this expertise in-house and keeping it current is barely feasible.

The Thomson Reuters collaboration with Amazon Web Services (AWS) has been instrumental in allowing us to build a solid, customized toolchain, while also giving us a channel to provide feedback and proposals on how AWS offerings could better meet our needs. On the AWS blog, we shared extensive details on how Thomson Reuters achieved AI/ML innovation at pace with machine learning operations (MLOps) services in the Amazon SageMaker ecosystem.

The generative AI and MLOps spaces are still early stage and fast moving. As an engineer within Thomson Reuters Labs – the dedicated applied research division of Thomson Reuters – I find it exciting to bring stability and solidity to this challenging, fast-paced environment. I hope you enjoy learning about Thomson Reuters Labs’ MLOps journey.

This is a guest post from Danilo Tommasina, distinguished engineer, Thomson Reuters. 
