Securing AI from Open Source Supply Chain Attacks
The AI arms race is in full swing as companies rush AI-based applications to market to gain a first-mover advantage. The inevitable result is a flood of poorly vetted software in which security is treated as an afterthought.
In the context of the Python ecosystem, which underlies the vast majority of ML/AI implementations, the use of pre-compiled binaries (Python wheels) poses the greatest supply chain security threat, since community-created binaries have become an increasingly common attack vector for hackers. But even building your own wheels is no guarantee of safety, as vendors from SolarWinds to CircleCI have proven to their customers’ detriment.
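One basic defense against tampered binaries is verifying an artifact's checksum against a pinned value before trusting it. The sketch below is illustrative only (the file contents and helper name are placeholders, not part of any specific tool), but it shows the core idea behind hash pinning using Python's standard library:

```python
import hashlib
import os
import tempfile

def sha256_matches(path: str, expected_hex: str) -> bool:
    """Return True if the file at `path` hashes to the pinned SHA256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large wheels don't have to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex

# Demo with a throwaway file so the sketch is self-contained;
# in practice `path` would be a downloaded wheel and `pinned`
# a digest recorded when the artifact was first vetted.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"example wheel contents")
    path = tmp.name

pinned = hashlib.sha256(b"example wheel contents").hexdigest()
print(sha256_matches(path, pinned))    # True: artifact matches the pin
print(sha256_matches(path, "0" * 64))  # False: wrong or tampered artifact
os.remove(path)
```

This is the same principle pip applies when a requirements file pins hashes: an artifact that doesn't match its recorded digest is rejected rather than installed.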
So how can organizations empower data scientists and developers with the latest and greatest ML tools without sacrificing security? This webinar explores how you can get secure Python environments for AI projects, even in the cloud.
- Why securing ML/AI matters for enterprises and consumers
- Security risks of PyPI and using prebuilt binaries
- How you can automatically build Python binaries securely from source – technical demo
- Utilizing trusted ML runtimes on a secure, scalable and open platform like Cloudera Machine Learning (CML)
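As context for the build-from-source topic above: pip itself already exposes two relevant controls, `--no-binary :all:` (force builds from source distributions instead of downloading wheels) and `--require-hashes` (refuse any artifact whose digest isn't pinned). A minimal requirements fragment might look like the following; the package version and digest shown are placeholders you would generate yourself, e.g. with `pip hash`:

```
# requirements.txt -- every requirement must carry a pinned hash
# when installed with: pip install --require-hashes -r requirements.txt
numpy==1.26.4 \
    --hash=sha256:<digest produced by 'pip hash' against your vetted artifact>
```

These flags address pinning and provenance for individual installs; they don't by themselves solve the broader problem of securely building and maintaining whole environments, which is what the demo covers.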
The days of data scientists needing to pull risky packages from public repos are over. Learn how to go from sandbox to production securely.
Date/Time: Thurs. Sep. 21, 10am PT / 1pm ET
Evan Cole, Sr. Solutions Engineer, ActiveState
At ActiveState, Evan works with enterprises to secure their open-source software supply chains while empowering dev teams to move faster. He is a certified AWS Cloud Architect with a research background in AI and big data engineering.
Dana Crane, Product Marketing Manager, ActiveState
With 25+ years in the software industry, Dana has both crossed and fallen into the chasm as a Product Marketer and Product Manager. When not playing basketball or writing blogs, his time is split between making products easier to use and easier to understand.