We kicked off 2024 by meeting the data science leaders' community in the Bay Area! The gathering was an excellent opportunity for AI leaders to engage in insightful discussions, share their top-of-mind challenges, and connect with their peers.
Meeting Top AI Leaders for Dinner
We love meeting our community to learn what matters most to them and the challenges they face leading AI teams.
During the dinner, we hosted leaders from prominent Bay Area companies, including J.P. Morgan Chase, Teladoc Health, Walmart, DocuSign, Epsilon, Varo, MUFG (Bank of Tokyo-Mitsubishi), Achieve, Gilead, and Clario. The dinner gave AI leaders the chance to connect with their peers and exchange unique insights and perspectives on tackling their leadership challenges.
Top leaders connecting and sharing their top challenges and tips
The dinner was attended by leaders from the healthcare, technology, and finance sectors, and the discussions spanned a broad spectrum of subjects, highlighting the unique challenges each industry faces.
Disclaimer:
The views expressed in this article are summarized, anonymized, and aggregated representations of personal opinions. They do not reflect the views, policies, or positions of specific companies or organizations with which the individuals might be associated. The information presented is intended for general informational purposes only.
Our conversations touched on a wide range of topics, including accelerating the transition from model development to production, managing risks in machine learning projects, and enhancing the productivity of data scientists.
The conversations were moderated by Remy Thellier & Alexander Gorgin. We’ve consolidated their insights below, including their challenges and tips.
Accelerating Model to Production
1. Validation Time and Alignment Across Teams
Challenges: One of the most common challenges for AI leaders operating in regulated environments is the excessive validation time required before a model can be put into production; in some companies, this used to take up to six months, severely hampering progress. A related, persistent difficulty is ensuring alignment across engineering, data science, business, and validation teams throughout the AI model's lifecycle.
Tips: To address the validation time challenge, the consensus among AI leaders is to start working in parallel with the modeling team from the beginning: introducing this collaboration early in the process accelerates validation and proactively surfaces and resolves potential issues.
Furthermore, AI leaders stress the need for transparency and ongoing alignment across teams, with this alignment extending beyond the initial production phase to include thorough change management, monitoring, contingency plans, and Model Risk Management (MRM). Maintaining open communication and collaboration between the various teams is thus essential to effectively address these validation and alignment challenges.
2. Overcoming Engineering Bottlenecks and Bridging MLOps Gaps
Challenges: AI teams often struggle with bottlenecks caused by heavy dependence on engineering support for deployment; waiting for engineering resources significantly slows the path to production. Additionally, in some organizations, data science teams lack the expertise to build production-ready models, and knowledge gaps in Machine Learning Operations (MLOps) cause delays and inefficiencies in deployment.
Tips: AI leaders emphasize reducing dependency on engineering by investing in a dedicated ML platform that empowers data scientists and ML engineers to deploy models more independently without relying on traditional engineering support. Having team members well-versed in technologies like Docker and Kubernetes can expedite deployment through increased autonomy and technical proficiency, enabling teams to bypass scheduling constraints.
Leaders also stress the importance of investing in MLOps to streamline collaboration between data scientists, ML engineers, and operations by creating cross-functional teams where engineers and data scientists work together closely. Recognizing engineering teams' contributions, even when they are not in the spotlight, is crucial for successful deployment.
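To make the self-service deployment idea concrete, here is a minimal sketch (ours, not something prescribed at the dinner) of a prediction endpoint built only on Python's standard library. The `predict` weights and the JSON request shape are hypothetical stand-ins for a real serialized model; in practice the script would be containerized (e.g., with Docker) and shipped to Kubernetes, so the team avoids queuing for engineering support.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features):
    """Score a feature vector. Hypothetical stand-in: a real team would
    load a serialized model here instead of hard-coded weights."""
    weights = [0.4, 0.6]
    return sum(w * x for w, x in zip(weights, features))

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Expect a JSON body like {"features": [1.0, 2.0]}.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        score = predict(payload.get("features", []))
        body = json.dumps({"score": score}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

def serve(port=8080):
    """Run the endpoint; a Dockerfile would invoke this as the entrypoint."""
    HTTPServer(("0.0.0.0", port), PredictHandler).serve_forever()
```

Because the service is a single self-contained script, data scientists comfortable with Docker can package and deploy it themselves, which is exactly the kind of autonomy the leaders described.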
3. Bridging the Knowledge Gap between Scientists and Data Scientists
Challenge: In the healthcare sector, AI leaders encounter a significant challenge: bridging the knowledge gap between scientists and data scientists. This gap is particularly evident in using data science to drive clinical trials.
Tip: Bringing in individuals with a background in medical science and training them in data science is more feasible than the reverse: hire scientists with expertise in the field, then train them in data science. This approach not only narrows the gap in scientific understanding but also proves beneficial when determining task priorities.
Managing the productivity of data scientists
4. Strategies for Successful AI Model Deployment and Team Focus
Balancing team focus with the imperative of successful AI model deployment requires a comprehensive approach. The AI leaders recommended the following key tips:
- Robust Documentation: Meticulously document every aspect of your AI development journey, from data preprocessing to model architecture choices, to ensure transparency and reproducibility. Consider using tools like Vectice for streamlined documentation.
- Foster Self-Sufficiency: Empower your team by investing in cloud infrastructure and a model deployment system, reducing reliance on external resources. Provide education and training to enhance their skills and foster a culture of alignment through knowledge sharing.
- Continuous Learning: Recognize that each iteration in the model-building process presents an opportunity for refinement and improvement. Encourage a culture of continuous learning within your team to stay up-to-date with the latest advancements.
- Align with Organizational Goals: Emphasize that the actual value of data science lies not just in producing impressive results but in making a meaningful impact on the company's objectives. Ensure that team efforts are aligned with organizational goals to drive positive change.
By incorporating these elements into your AI deployment strategy, you can effectively address the challenges of maintaining team focus and achieving successful model deployment.
Managing risks in ML initiatives
5. Balancing ML Initiatives and Responsible AI: Strategies for Success
Challenge: One of the most significant challenges machine learning leaders face is the delicate balance between advancing multiple ML initiatives and effectively fulfilling Responsible AI requests. The leaders agreed on three tips to address it.
Tips:
- Prioritize Risk Assessment: Quantify the risk associated with each ML initiative and Responsible AI request. AI leaders can determine where to focus their efforts by assessing and ranking the potential risks. This quantification allows for a prioritized approach, ensuring that the most critical areas receive attention and resources while maintaining pace.
- Synchronize with Business Objectives: AI leaders must ensure ML initiatives stay in sync with business objectives by engaging stakeholders from inception through deployment. Regular collaboration and feedback loops foster alignment, streamline decision-making and enhance project success. This includes having centralized and embedded teams closer to the business to reach an MVP faster.
- Standardized CI/CD Pipelines: Another aspect of mitigating risks is implementing CI/CD pipelines tailored for MLOps and governance. These pipelines guarantee responsible ML model deployment, automating vital aspects from the outset and ensuring alignment with Responsible AI principles.
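To illustrate the risk-assessment tip above, the following sketch (an illustration of ours, not a method discussed at the dinner) ranks a backlog of hypothetical initiatives by a simple likelihood-times-impact score, so the riskiest items receive validation and Responsible AI attention first:

```python
def risk_score(likelihood, impact):
    """Both inputs on a 1-5 scale; higher score = higher priority."""
    return likelihood * impact

def prioritize(initiatives):
    """Sort (name, likelihood, impact) tuples by descending risk score."""
    return sorted(initiatives, key=lambda i: risk_score(i[1], i[2]), reverse=True)

# Hypothetical backlog for illustration only.
backlog = [
    ("credit-scoring model refresh", 4, 5),  # regulated, high business impact
    ("marketing churn model", 2, 3),
    ("internal doc-search assistant", 3, 2),
]
ranked = prioritize(backlog)
```

Even a crude score like this gives teams a shared, explicit basis for deciding which initiatives get Responsible AI review first, rather than relying on ad hoc judgment.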
What about LLMs?
Remy Thellier, Vectice’s Head of Growth and creator of the Vectice community, shares the current state of LLMs as discussed:
“It’s interesting to notice that at the moment in many enterprises, large language models are predominantly employed to explore new use cases rather than replace existing machine learning and data science models.
In addition, LLMs are extensively utilized for existing use cases at the feature-creation phase, as they can significantly reduce the effort required for data labeling. Depending on the environment (GCP, Azure, or AWS), the LLMs trending at the moment are Mistral, GPT-3.5/4, and Anthropic. Finally, managing the risk of deploying an LLM remains a top challenge in regulated sectors.”
Want to be a part of our AI Leaders’ Community?
You'll gain access to resources, events, and dinners exclusively for directors, VPs, and C-level data science leaders.
- Online Gatherings: Discuss your most pressing challenges with up to 5 peers in Vectice-facilitated video calls.
- Dinners: Join us for dinner with 10 top leaders of data science. We organize quarterly dinners in our 11 geographical clusters.
- Apply to speak at events: Our events are recorded with multiple high-profile speakers and a maximum of 30 attendees. They happen during the week after work hours to make it convenient for everyone.
Join the community ➡️ https://www.vectice.com/community