Securing AI from Open Source Supply Chain Attacks
The AI arms race is in full swing as companies rush AI-based applications to market in order to gain a first-mover advantage. The inevitable result is a flood of poorly vetted software in which security has been relegated to an afterthought.
In the Python ecosystem, which underlies the vast majority of ML/AI implementations, the use of pre-compiled binaries (Python wheels) poses the greatest supply chain security threat, since community-built binaries have become an increasingly common attack vector for hackers. But even building your own wheels is no guarantee of safety, as vendors from SolarWinds to CircleCI have proven to their customers’ detriment.
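One basic mitigation is to verify every artifact against a digest published by a source you trust before installing it. The sketch below is a minimal illustration of that idea, not tooling from the webinar; the wheel filename and expected digest are placeholders you would replace with values from your own build system. pip's hash-checking mode (`--require-hashes`) automates the same check at install time.

```python
import hashlib
import sys
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file and return its hex-encoded SHA-256 digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Placeholder values: point these at a real wheel and the digest
    # published by a source you trust (e.g., your internal build system).
    artifact = Path("numpy-1.26.4-cp311-cp311-manylinux_2_17_x86_64.whl")
    expected = "replace-with-trusted-sha256-digest"

    actual = sha256_of(artifact)
    if actual != expected:
        sys.exit(f"Digest mismatch for {artifact.name}: refusing to install")
    print(f"{artifact.name} matches the trusted digest")
```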
So how can organizations empower data scientists and developers with the latest and greatest ML tools without sacrificing security? This webinar explores how you can get secure Python environments for AI projects, even in the cloud.
We cover:
- Why securing ML/AI matters for enterprises and consumers
- The security risks of PyPI and prebuilt binaries
- How you can automatically build Python binaries securely from source – technical demo (see the sketch after this list)
- Utilizing trusted ML runtimes on a secure, scalable and open platform like Cloudera Machine Learning (CML)
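As a rough sense of what “building from source” means in practice, here is a minimal sketch using stock pip rather than the tooling demoed in the webinar: it refuses prebuilt wheels, compiles dependencies locally into a private wheelhouse, and then installs only from that wheelhouse with PyPI blocked. The requirements file and directory names are assumptions for illustration.

```python
import subprocess
import sys

# Hypothetical paths; adjust for your project.
REQUIREMENTS = "requirements.txt"
WHEELHOUSE = "wheelhouse"

def build_from_source() -> None:
    """Compile every dependency from its sdist instead of trusting prebuilt wheels."""
    subprocess.run(
        [
            sys.executable, "-m", "pip", "wheel",
            "--no-binary", ":all:",        # refuse prebuilt wheels from PyPI
            "--requirement", REQUIREMENTS,
            "--wheel-dir", WHEELHOUSE,     # locally built wheels land here
        ],
        check=True,
    )

def install_from_wheelhouse() -> None:
    """Install only from the wheels just built, never from the network."""
    subprocess.run(
        [
            sys.executable, "-m", "pip", "install",
            "--no-index",                  # block PyPI entirely at install time
            "--find-links", WHEELHOUSE,
            "--requirement", REQUIREMENTS,
        ],
        check=True,
    )

if __name__ == "__main__":
    build_from_source()
    install_from_wheelhouse()
```

Stock pip compiles on whatever machine runs it; the webinar’s demo focuses on automating those source builds on secure infrastructure instead of a developer laptop.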
The days of data scientists needing to pull risky packages from public repos are over. Learn how to go from sandbox to production securely.
Get a Personalized Demo: Book a 30-minute session with our solution experts to see how ActiveState helps you seamlessly secure Python and other open source environments, from sandbox to production.