    aiops

    Explore "aiops" with insightful episodes like "#187 GenAI RAG Details", "2024 Look Ahead - Using AI to Enable Personal Productivity", "BeeYond AI: A TI por trás da IA", "Ron Bodkin, ChainML founder and CEO and ex-Google exec, shares how to ensure AI is used to benefit humanity", and "The future of software engineering is powered by AIOps and open source" from podcasts like "Embracing Digital Transformation", "The Cloudcast", "The Shift", "AI and the Future of Work", "The Stack Overflow Podcast", and more!

    Episodes (14)

    #187 GenAI RAG Details


    In part two of his interview with Eduardo Alvarez, Darren explores the use of GenAI LLMs and RAG (Retrieval-Augmented Generation) techniques to help organizations leverage the latest advancements in AI quickly and cost-effectively.
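    As a rough illustration of the RAG idea mentioned above, the sketch below retrieves the most relevant documents for a query and prepends them to the prompt before it would be sent to an LLM. The corpus, the naive word-overlap scorer, and the prompt format are illustrative assumptions, not details from the episode.

```python
# Minimal RAG sketch: retrieve relevant documents, then augment the
# model prompt with them before generation.

CORPUS = {
    "doc1": "AIOps applies machine learning to IT operations data.",
    "doc2": "RAG augments a language model prompt with retrieved context.",
    "doc3": "Kubernetes orchestrates containerized workloads.",
}

def retrieve(query: str, k: int = 2) -> list:
    """Rank documents by word overlap with the query; return the top k."""
    words = set(query.lower().split())
    ranked = sorted(
        CORPUS.values(),
        key=lambda doc: len(words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Prepend retrieved context to the user's question."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("How does RAG help a language model?")
print(prompt)
```

    A production system would swap the word-overlap scorer for vector embeddings and send the assembled prompt to an actual LLM; the shape of the pipeline stays the same.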

     Leveraging Language Model Chains


    In a landscape where accessible technologies are ubiquitous, operational efficiency is what sets an application apart. However, handling an assortment of tasks with a single language model does not always yield optimal results, which brings us to the concept of Language Model (LM) chains. 


    LM chains integrate several models working together in a pipeline to improve user interaction with an application. Just as every task demands its own approach, every segment of your application may perform best with an individualized language model; there is no one-size-fits-all choice. Several real-world implementations are already capitalizing on the strength of multiple LMs working in harmony. 
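    The LM-chain idea described above can be sketched minimally: a cheap routing step picks which specialized model handles each request, rather than sending everything to one model. The router and the model functions here are stand-in stubs (assumptions for illustration), not real model calls.

```python
# Sketch of an LM chain: each pipeline stage is handled by a model
# suited to that task instead of one model doing everything.

def classify_intent(text: str) -> str:
    """A small, cheap model could route requests by intent; stubbed here."""
    return "summarize" if "summarize" in text.lower() else "answer"

def summarizer_model(text: str) -> str:
    """Stand-in for a model specialized in summarization."""
    return f"[summary of: {text}]"

def answerer_model(text: str) -> str:
    """Stand-in for a model specialized in question answering."""
    return f"[answer to: {text}]"

def lm_chain(user_input: str) -> str:
    """Chain: route first, then dispatch to the specialized model."""
    intent = classify_intent(user_input)
    handlers = {"summarize": summarizer_model, "answer": answerer_model}
    return handlers[intent](user_input)

print(lm_chain("Please summarize this incident report"))
print(lm_chain("What caused the outage?"))
```

    The routing step is where the system-optimization decisions discussed below come in: a small classifier can decide when the expensive large model is actually needed.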


     System Optimization and Data Veracity


    The holistic optimization of the system is an integral part of leveraging LM chains. Everything from choosing the perfect moment to deploy a large language model to selecting the ideal architecture for computing forms an essential part of this process. The right decisions can dramatically bolster system performance and improve operational efficiency.


    Integrating multiple models also opens novel avenues for research and development, particularly around data veracity within such setups. It poses fascinating challenges and opportunities ripe for exploration and discovery. 


     Maintaining Discretionary Access for Data Privacy


    When discussing data privacy, it is essential to understand the balance between utilizing more extensive institutional databases and preserving private user information. Eduardo suggests maintaining discretionary control over database access, ensuring operational superiority and data privacy. 


     Rising Fusion of AI and Real Data Ops


    Predicting future trends, Eduardo anticipates a fusion of AI and real data ops, resembling the blend of operational excellence and tool integration that configuration management engineers achieved in the '90s. This blend translates into distributed heterogeneous computing in AI and shapes the future of AIOps.


     Concluding Thoughts


    Technology should invariably strive to simplify systems without sacrificing performance or efficiency. A thorough understanding of the available tools is a prerequisite to leveraging them successfully. Incorporating LM chains in AI applications is a step in this direction, paving the way for an enriched user experience. Our conversation with Eduardo Alvarez underscores the importance of these insights in navigating the evolving landscape of AI.

    2024 Look Ahead - Using AI to Enable Personal Productivity


    Aaron and Brian talk to Mark Hinkle (@mrhinkle, Founder @peripety_labs) about enabling AI to drive personal productivity, AI experimentation and AIOps soft skills.

    SHOW: 783

    CLOUD NEWS OF THE WEEK - http://bit.ly/cloudcast-cnotw

    NEW TO CLOUD? CHECK OUT - "CLOUDCAST BASICS"

    SHOW SPONSORS:

    • Reduce the complexities of protecting your workloads and applications in a multi-cloud environment. Panoptica provides comprehensive cloud workload protection integrated with API security to protect the entire application lifecycle.  Learn more about Panoptica at panoptica.app
    • CloudZero – Cloud Cost Visibility and Savings
    • CloudZero provides immediate and ongoing savings with 100% visibility into your total cloud spend

    SHOW NOTES:


    Topic 1 - Welcome back to the show. Give our audience a little bit of your background and tell us what you focus on now with Peripety Labs.

    Topic 2 - If you’re a business leader today, where are you focusing your priorities in terms of taking advantage of AI to improve the business?

    Topic 3 - If you’re a knowledge worker today, where are you focusing your priorities in terms of taking advantage of AI to improve your job?

    Topic 4 - What’s a good way to think about experimentation with AI vs. trying to start measuring results now? Any tips to help companies or workers accelerate their learning curves?

    Topic 5 - What are some of the biggest misconceptions you’re seeing across the industry right now around AI? Concerns, fears, anti-patterns, etc. 

    Topic 6 - We’ve learned over many years (and technology trends) that the most successful ones align technology and people/culture/organization. What AI alignments of people and technology are most needed? 


    FEEDBACK?

    BeeYond AI: A TI por trás da IA


    Artificial Intelligence platforms for IT operations (AIOps) are being adopted across many sectors to optimize and drive the success of digital businesses. In finance and banking, they support the journey toward Open Banking and also help with fraud detection and real-time transaction monitoring. Wagner Arnaut, CTO at IBM, and Fabio Napoli, CTO at Itaú Unibanco, explain AIOps through real-world use cases.

    Curated links

    IBM's Watson X website

    The CDO Study 2023 report

    The IBM Global AI Adoption Index 2022 report

    IBM's guide to Principles for Ethical AI

    From The Shift, the directory of Generative AI content

    From The Shift, the directory of Artificial Intelligence content

    _____

    CONTACT US

    Email: news@theshift.info

    _____

    SUBSCRIBE TO THE SHIFT
    www.theshift.info

    Ron Bodkin, ChainML founder and CEO and ex-Google exec, shares how to ensure AI is used to benefit humanity


    Ron Bodkin is a self-described “serial entrepreneur focused on beneficial uses of AI”. Ron founded ChainML in April 2022 to make it easier to integrate AI models into applications. The AI we know today is immature in many ways, and many of them relate to how crude the tooling is for traditional developers building AI-first features. 

    The ChainML protocol is a cost-efficient, decentralized network built for compute-intensive applications running on blockchain technology. Prior to founding ChainML, Ron had a distinguished entrepreneurial career, having founded Think Big Analytics, which was eventually acquired by Teradata, after which he spent three years in applied AI at Google. Ron is also an active investor and advisor and has degrees in Computer Science from McGill and MIT.

    Listen and learn...

    1. What led Ron to focus on how AI can have a positive impact on the world
    2. Why Hinton's right when he says "we've invented a superior form of learning"
    3. Where the current toolstack for building LLM apps is incredibly immature
    4. How to control the cost and performance of LLM apps
    5. Why human brains are inefficient
    6. Why the "effective cost of computing" is being reduced by 50% every year
    7. How we may get to AGI within 20 years
    8. Why proprietary datasets and commercial issues will slow down AI innovation
    9. The right way to regulate AI

    References in this episode...

    The future of software engineering is powered by AIOps and open source


    Over the past five years, Intuit went through a total cloud transformation—they closed their data centers, built out a modern SaaS development environment, and went cloud native with foundational building blocks like containers and Kubernetes. Now they are looking to continue transforming into an AI-driven organization that leverages their data to make their customers’ lives easier. Along the way, they realized that their internal systems have the same requirements for AI-driven insights. 

    Episode notes

    Wadher notes that Intuit uses development velocity, not developer velocity. The thinking is that an engineering org should focus on shipping products and features faster, not making individual devs more productive. 

    No, the robots aren’t coming for your jobs. Wadher says their AI strategy relies on helping experts make better insights. The goal is to arm those experts, not replace them. 

    In terms of sheer volume, the AI/ML program at Intuit is massive. They make 58 billion ML predictions daily, enable 730 million AI-driven customer interactions every year, and maintain over two million personalized AI models. 

    Intuit’s not here to hoard secrets. They’ve open-sourced their DevOps pipeline tool, Argo. They found that a lot of companies used it for AI and data pipelines, and have recently launched Numaproj, which open-sources a lot of the tools and capabilities they use internally. 

    Congrats to Lifeboat badge winner Bill Karwin for their answer to Understanding MySQL licensing. 

    Automating Decisions, Processes and ML without Code


    Loren Goodman (CTO @InRule) talks about the intersection of AI/ML, DevOps, and Low Code/NoCode and making complex decisions using automation.

    SHOW: 604

    CLOUD NEWS OF THE WEEK -
    http://bit.ly/cloudcast-cnotw

    CHECK OUT OUR NEW PODCAST - "CLOUDCAST BASICS"

    SHOW SPONSORS:


    SHOW NOTES:


    Topic 1 - Loren, welcome to the show. How about a brief introduction and background. 

    Topic 2 - Our audience shows an interest in AI/ML, but we as the hosts aren’t deep on the AI/ML side. So, let’s start at the start: what is a decision platform? How is AI/ML involved? And most importantly, why do companies need one?

    Topic 3 - As I understand it, this is about the intersection of AI/ML with DevOps, plus providing access to non-technical SMEs as well? Is this the correct way to think about it? How does this relate to automating complex decisions, and how does this help the business? Does this stray into the topic of LowCode/NoCode?

    Topic 4 - Anytime I think about automation, especially highly automated CI/CD pipelines, I worry about a ripple effect. Is it true to say the more complex the decisions, the higher the risk that something will not go as planned? Any recommendations or best practices/lessons learned you would like to share?

    Topic 5 - This has me thinking about another trend: we keep hearing about everything with Ops tacked onto the end. Of course DevOps, then DevSecOps, AIOps, etc. Is this DecisionOps? What industries are typically drawn to these types of platforms? I can see this benefiting highly regulated industries. 

    Topic 6 - As this may be a new topic for some, if folks want to learn more, how would you suggest they get started in this space?


    FEEDBACK?

    Comet ML Office Hours 15 | 23MAY2021

    Check out the episode recap here: https://www.comet.ml/site/comet-office-hours-recap-for-may-23rd-and-may-30th/

    Comet provides a self-hosted and cloud-based meta machine learning platform allowing data scientists and teams to track, compare, explain, and optimize experiments and models. Backed by thousands of users and multiple Fortune 100 companies, Comet provides insights and data to build better, more accurate AI models while improving productivity, collaboration, and visibility across teams.

    Register for future sessions here: http://bit.ly/comet-ml-oh
    Check out Comet ML by visiting: https://www.comet.ml/
    Check out the latest FREE e-book from Comet - Building Effective Machine Learning Teams: https://bit.ly/3bWrJ0O
    Or on Twitter: https://twitter.com/CometML
    On YouTube: https://www.youtube.com/channel/UCmN63HKvfXSCS-UwVwmK8Hw
    Vote in the data community content creators awards! http://bit.ly/data-creators-awards
    Check it out and don't forget to register for Friday Happy Hour sessions: http://bit.ly/adsoh
    Watch the episode on YouTube here: https://www.youtube.com/playlist?list=PLx-pFwty92wJoWzoO7WlfaM7iYB8qjm
    Check out the interview I had with the Super Data Science Podcast: https://www.superdatascience.com/podcast/landing-your-data-science-dream-job

    [00:01:08] How to deal with the confusion you face while learning new things
    [00:04:09] Dealing with failed data science projects
    [00:06:58] How do you go about making sure you collect the right kind of data in the first place?
    [00:09:29] Start with three questions
    [00:11:59] The balance between learning technical stuff and learning how to solve actual problems
    [00:15:28] How are you overcoming learning struggles?
    [00:18:07] Learning vs. doing
    [00:21:58] When the data doesn’t support much predictive power
    [00:28:04] Everyone will become a data scientist, eventually
    [00:30:38] The importance of domain knowledge
    [00:35:03] Define failure up front
    [00:37:56] Is low code the end of data science as we know it?
    [00:42:19] Reproducibility
    [00:48:25] Adam with some controversy
    [01:02:36] How do you do personal inventory on your skills?

    Putting AI into IT Operations in 2021


    IT professionals have thick skin that deflects most marketing, especially when it comes to AI. After all, your phone apparently uses AI to frame the perfect picture (really?), and your TV knows and adapts to your taste in viewing thanks to "advanced machine learning" (nope).

    So selling AI into ITops must be an uphill struggle. But with the right message and — most importantly — demonstrable use-cases and proof, that's just what Digitate does. Jayanti Murti talks to us about perceptions of AI, the verticals most ready to adopt, and how machine-speed iterative algorithms actually improve the daily lives of IT staff.

    Digitate launched in 2015, and was built by software engineers and more PhDs than you'd ever expect to see in a single room at any one time. A platform that's "made its bones", then, but we still have questions!

    Jayanti on LinkedIn can be found here:

    https://www.linkedin.com/in/jayantivsnmurty/

    Joe's page is here:

    https://www.linkedin.com/in/josephedwardgreen/

    The past, present, and future of AIOps with Dr. Helen Gu, CTO and Founder of InsightFinder


    In this special episode with Dr. Helen Gu, author or co-author of 80+ academic papers on distributed computing and AI for systems management, we discuss the past, present, and future of AI for IT Operations.

    Listen and learn...

    1. How and why the technology behind common problems like image recognition fails when applied to machine data
    2. Where Helen and her team were nine years ahead of Amazon
    3. Why unsupervised machine learning is required to predict and prevent system downtime
    4. Why machine learning is well-suited to anomaly detection for machine data
    5. The next technology breakthrough that will eliminate sleepless nights for developers
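    The unsupervised approach in points 3 and 4 above can be sketched in a few lines: flag metric samples that deviate strongly from the series' own statistics, with no labeled training data. The threshold and the sample data are illustrative assumptions, far simpler than anything a production AIOps platform would use.

```python
# Minimal unsupervised anomaly detection on machine data: flag samples
# whose z-score against the series' own mean/stdev exceeds a threshold.
import statistics

def zscore_anomalies(samples, threshold=2.0):
    """Return indices of samples more than `threshold` std devs from the mean."""
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []  # a flat series has no outliers
    return [i for i, x in enumerate(samples) if abs(x - mean) / stdev > threshold]

# Steady CPU utilization with one spike; no labels were needed to find it.
cpu = [41.0, 40.5, 42.1, 39.8, 41.3, 97.0, 40.2, 41.7]
print(zscore_anomalies(cpu))
```

    Real systems replace the global z-score with models robust to seasonality and drift, but the core idea of learning "normal" from the data itself is the same.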

    Research papers referenced on today's episode:

    1. Helen's 10-year Symposium on Cloud Computing award about elastic resource scaling
    2. Fixing the hang bug problem


    We asked execs about the future of AI in IT operations. What they said may surprise you.


    This week's special episode of AI and the Future of Work features execs discussing how they use AI to reduce downtime. More important, they share how, as leaders, they're navigating the complicated relationship between humans and machines.

    Listen and learn from some of the best:

    1. Joel Eagle on how McDonald's thinks about managing data at scale to improve the guest experience.
    2. Mark Settle, seven-time CIO and best-selling author of Truth from the Valley, on how the shift to cloud-based apps has changed user behavior.
    3. Ray Lippig, Program Manager at J.B. Hunt, on how anomaly detection is changing the logistics business.
    4. Sean Barker, CEO of cloudEQ, on how AI means we can architect cloud solutions designed for resilience.

    Plus, hear three predictions for how AI will change work in the next 18 months.

    For the first time, we also published a video version of this week's podcast on YouTube. Enjoy!

    Distributed Search with Elastic Field CTO Steve Mayzak


    Carter Morgan (@carterthecomic) and Stephanie Wong (@stephr_wong) are back this week with another episode of the SaaS Podcast! This week, we're speaking with Field CTO Steve Mayzak about Elastic and how they're perfecting the thorough search of data in distributed systems.

    Steve starts the show by explaining the role of a Field CTO and how he helps develop strategies for the future of Elastic. He describes the function of Elastic in detail, sharing that the search capabilities they've developed can do anything from satisfying your curiosity on Wikipedia to protecting submarines from attack. Whether it's online shopping, sifting through years of web logs, or detecting threats to your project, people expect to find exactly what they're looking for fast, and Elastic has the power to do that.

    Later, we talk about why open source is so important to Elastic and how it led to the development of programs like Logstash, software that can manipulate and search through logs; Kibana, software that improves UI and simplifies data organization for any system; and Endpoint, security software that searches out threats to the system. Steve talks about real-life use cases of these Elastic products and how companies like car manufacturers can use the system to predict machine maintenance, thus decreasing downtime. We discuss the future of Elastic and learn how these use cases influence future product development.

    We wrap up the show with a detailed discussion of the Elastic stack, including the technologies they use to keep the system running smoothly, ensure data is well indexed and organized, and perfect the scalability of Elastic. Steve talks about roadblocks the company has faced and the solutions they found, as well as Covid-specific changes they've made and how they're helping other companies deal with issues caused by the global pandemic. He offers advice for companies now facing this work-from-home scenario and ways to run efficient teams no matter their location, as well as the future of data management.

    Episode Links:
    Elastic
    Elasticsearch
    Logstash
    Kibana
    Elastic Security


    Easy Data Scaling with DataStax Chief Product Officer Ed Anuff


    Stephanie Wong (@stephr_wong) and Carter Morgan (@carterthecomic) are back with another episode of How I Launched this: A SaaS Story! This week, they're talking to DataStax Chief Product Officer, Ed Anuff, about how the company has managed to create massively scalable databases capable of running on any cloud platform. 

    Ed starts the show by explaining the inspiration for DataStax and why they chose Apache Cassandra to build their software. At DataStax, they recognized the need for packaged software to allow businesses to harness the power of data securely, and their appreciation for open source software meant Apache Cassandra was the perfect fit.

    Later in the show, we talk about the evolution of DataStax and Cassandra and how they've changed as technology has evolved. Ed talks about their newest offering, DataStax Astra, explaining the ways it allows Cassandra to work efficiently in the cloud. Specifically, Ed details Astra's scale-out capability and how it's handled on different cloud platforms, including Google.

    The show wraps up with a look at the future of SaaS and the technologies Ed believes will become more and more popular. He makes the point that as technology continues to advance, developers will need to think more about integration of products and services.

    Episode Links:
    DataStax
    Apache Cassandra
    DataStax Astra
    Envoy
    DataStax on GitHub
    DataStax Blog
    DataStax Academy
    DataStax Documentation



    Welcome to How I Launched This: A SaaS Story


    Welcome to How I Launched This: A SaaS Story from Google Cloud. SaaS embraces the full potential of the Cloud and is arguably changing the way organizations work. Each episode, Stephanie Wong (@stephr_wong) and Carter Morgan (@carterthecomic) will go in-depth on a different SaaS story, sharing the initial challenges, approaches, and ultimate impact various global leaders have faced implementing their own SaaS solutions. Stay tuned to learn more about SaaS, the technologies being employed, other lessons learned and stories from industry leaders all over the world. Stories like sustainable futures, full-stack monitoring, AIOps and headless commerce.

    If you want to keep up to date with the latest on SaaS, Google Cloud, and this show, subscribe wherever you listen to podcasts, follow Google Cloud on Twitter, and give us a share. Reach out to Carter and Stephanie with suggestions, comments, and questions to keep the conversation going online. Check out each episode's show notes for more information, rate and review our show on Apple Podcasts, and stay tuned for our frequent releases. 

    We'll see you next time.

    AIOps for Security and Breach Protection


    SHOW: 389

    DESCRIPTION: Brian talks with Adam Hunt (CTO and Chief Data Scientist at @RiskIQ) about the breadth of security breaches, how AI/ML can play a role if used properly, and immediate steps to improve protection for breaches.

    SHOW SPONSOR LINKS:

    CLOUD NEWS OF THE WEEK

    AWS Announced Open Distro for Elasticsearch

    https://aws.amazon.com/blogs/opensource/keeping-open-source-open-open-distro-for-elasticsearch/

    Rebuttals or Commentary on Open Distro for Elasticsearch

    Continuous Delivery Foundation launched by Linux Foundation
    https://devops.com/the-linux-foundation-launches-continuous-delivery-foundation/

    VC Investment in the Service Mesh space
    Buoyant ($10M)
    Tetrate ($12.5M)

    SHOW INTERVIEW LINKS:

    SHOW NOTES:

    Topic 1 - Welcome to the show. You have quite an interesting and impressive background. Can you talk a little bit about your work in academia prior to RiskIQ, and then what drew you to this space?

    Topic 2 - RiskIQ focuses on helping companies mitigate massive security attacks. For people that don’t live in the security domain, can you give us a sense of what one of these attacks and breaches looks like? 

    Topic 3 - Can you give us a sense of how many of these massive attacks are utilizing new techniques, or is it variants of existing techniques, or just old techniques looking for new (vulnerable) targets? And are there tools to help companies understand how to prioritize against these?  

    Topic 4 - Where are we in the industry in terms of the intersection of security best practices that IT teams can control, and when ML-driven capabilities can augment for more proactive security? 

    Topic 5 - What are some of the things you’re recommending to companies that help make an immediate impact on preventing or reducing massive breaches?

    Feedback?