
    responsibleai

    Explore "responsibleai" with insightful episodes like "The OCI AI Portfolio", "AI and Cybersecurity with Karissa Breen and EJ Wise", "#3/6 Kathrin Schwan – „Ich sehe große Chancen für uns als Gesellschaft“", "AI Australia Presents: This Week in AI vol 11" and "AI Australia Presents: This Week in AI vol 10" from podcasts like ""Oracle University Podcast", "AI Australia Podcast", "STRAT TALKS", "AI Australia Podcast" and "AI Australia Podcast"" and more!

    Episodes (25)

The OCI AI Portfolio

    Oracle has been actively focusing on bringing AI to the enterprise at every layer of its tech stack, be it SaaS apps, AI services, infrastructure, or data.

    In this episode, hosts Lois Houston and Nikita Abraham, along with senior instructors Hemant Gahankari and Himanshu Raj, discuss OCI AI and Machine Learning services. They also go over some key OCI Data Science concepts and responsible AI principles.

    Oracle MyLearn: https://mylearn.oracle.com/ou/learning-path/become-an-oci-ai-foundations-associate-2023/127177

    Oracle University Learning Community: https://education.oracle.com/ou-community

    LinkedIn: https://www.linkedin.com/showcase/oracle-university/

    X (formerly Twitter): https://twitter.com/Oracle_Edu

    Special thanks to Arijit Ghosh, David Wright, Himanshu Raj, and the OU Studio Team for helping us create this episode.

    -------------------------------------------------------

    Episode Transcript:

    00:00

Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started!

    00:26

    Lois: Welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Principal Technical Editor.

Nikita: Hey everyone! In our last episode, we dove into Generative AI and Large Language Models. 

    Lois: Yeah, that was an interesting one. But today, we’re going to discuss the AI and machine learning services offered by Oracle Cloud Infrastructure, and we’ll look at the OCI AI infrastructure.

    Nikita: I’m also going to try and squeeze in a couple of questions on a topic I’m really keen about, which is responsible AI. To take us through all of this, we have two of our colleagues, Hemant Gahankari and Himanshu Raj. Hemant is a Senior Principal OCI Instructor and Himanshu is a Senior Instructor on AI/ML. So, let’s get started!

    01:16

    Lois: Hi Hemant! We’re so excited to have you here! We know that Oracle has really been focusing on bringing AI to the enterprise at every layer of our stack. 

    Hemant: It all begins with data and infrastructure layers. OCI AI services consume data, and AI services, in turn, are consumed by applications. 

This approach involves extensive investment from infrastructure to SaaS applications. Generative AI and massive-scale models are the more recent steps. Oracle AI is the portfolio of cloud services that helps organizations use the data they have for business-specific uses. 

    Business applications consume AI and ML services. The foundation of AI services and ML services is data. AI services contain pre-built models for specific uses. Some of the AI services are pre-trained, and some can be additionally trained by the customer with their own data. 

AI services can be consumed by calling the API for the service and passing in the data to be processed; the service then returns a result. There is no infrastructure to be managed for using AI services. 

    02:37

    Nikita: How do I access OCI AI services?

Hemant: OCI AI services provide multiple methods for access. The most common method is the OCI Console. The OCI Console provides an easy-to-use, browser-based interface that enables access to notebook sessions and all the features of the Data Science service, as well as the AI services. 

The REST API provides access to service functionality but requires programming expertise, and an API reference is provided in the product documentation. OCI also provides programming language SDKs for Java, Python, TypeScript, JavaScript, .NET, Go, and Ruby. The command line interface provides both quick access and full functionality without the need for scripting. 
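To make that access pattern concrete, here is a minimal sketch using the OCI Python SDK (the oci package) to call one of the AI services, OCI Language, in exactly the way Hemant describes: create a client, pass in the data, and read back the result. The client, model, and method names follow the SDK documentation, but they may differ across SDK versions, so treat this as illustrative rather than exact.

```python
# Minimal sketch: calling an OCI AI service (Language) via the Python SDK.
# Assumes the oci package is installed and ~/.oci/config holds valid credentials.
import oci

# Load credentials from the default OCI config file.
config = oci.config.from_file()

# Create a client for the OCI Language service.
client = oci.ai_language.AIServiceLanguageClient(config)

# Build the request: a batch of documents to analyze.
details = oci.ai_language.models.BatchDetectDominantLanguageDetails(
    documents=[
        oci.ai_language.models.DominantLanguageDocument(
            key="doc-1",
            text="OCI AI services expose pre-trained models through a simple API.",
        )
    ]
)

# Call the service and read the result; no infrastructure to manage.
response = client.batch_detect_dominant_language(details)
for doc in response.data.documents:
    print(doc.key, doc.languages[0].code, doc.languages[0].score)
```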

    03:31

    Lois: Hemant, what are the types of OCI AI services that are available? 

Hemant: OCI AI services is a collection of services with pre-built machine learning models that make it easier for developers to build a variety of business applications. The models can also be custom trained for more accurate business results. The different services provided are Digital Assistant, Language, Vision, Speech, Document Understanding, and Anomaly Detection. 

    04:03

    Lois: I know we’re going to talk about them in more detail in the next episode, but can you introduce us to OCI Language, Vision, and Speech?

Hemant: OCI Language allows you to perform sophisticated text analysis at scale. Using the pre-trained and custom models, you can process unstructured text to extract insights without data science expertise. Pre-trained models include language detection, sentiment analysis, key phrase extraction, text classification, named entity recognition, and personally identifiable information detection. 

Custom models can be trained for named entity recognition and text classification with domain-specific data sets. In text translation, neural machine translation is used to translate text across numerous languages. 

Using OCI Vision, you can upload images to detect and classify objects in them. Pre-trained models and custom models are supported. In image analysis, pre-trained models perform object detection, image classification, and optical character recognition, while custom models can perform custom object detection by detecting the location of custom objects in an image and providing a bounding box. 
The OCI Speech service is used to convert media files to readable text, stored in JSON and SRT formats. Speech enables you to easily convert media files containing human speech into highly accurate text transcriptions. 

    05:52

    Nikita: That’s great. And what about document understanding and anomaly detection?

Hemant: Using OCI Document Understanding, you can upload documents to detect and classify text and objects in them. You can process individual files or batches of documents. In OCR, document understanding can detect and recognize text in a document. In text extraction, document understanding provides the word-level and line-level text, along with the bounding-box coordinates of where the text is found. 

In key value extraction, document understanding extracts a predefined list of key-value pairs of information from receipts, invoices, passports, and driver IDs. In table extraction, document understanding extracts content in tabular format, maintaining the row and column relationships of cells. In document classification, document understanding classifies documents into different types. 

The OCI Anomaly Detection service analyzes large volumes of multivariate or univariate time series data. The Anomaly Detection service increases the reliability of businesses by monitoring their critical assets and detecting anomalies early, with high precision. Anomaly detection is the identification of rare items, events, or observations in data that differ significantly from expectations. 

    07:34

    Nikita: Where is Anomaly Detection most useful?

Hemant: The Anomaly Detection service is designed to help with analyzing large amounts of data and identifying anomalies at the earliest possible time with maximum accuracy. Different sectors, such as utilities, oil and gas, transportation, manufacturing, telecommunications, banking, and insurance, use the Anomaly Detection service in their day-to-day activities. 

    08:02

    Lois: Ok.. and the first OCI AI service you mentioned was digital assistant…

Hemant: Oracle Digital Assistant is a platform that allows you to create and deploy digital assistants, which are AI-driven interfaces that help users accomplish a variety of tasks through natural language conversations. When a user engages with the Digital Assistant, the Digital Assistant evaluates the user input and routes the conversation to and from the appropriate skills. 
Digital Assistant greets the user upon access and, upon user request, lists what it can do and provides entry points into the given skills. It routes explicit user requests to the appropriate skills, handles interruptions to flows and disambiguation, and also handles requests to exit the bot. 

    09:00

    Nikita: Excellent! Let’s bring Himanshu in to tell us about machine learning services. Hi Himanshu! Let’s talk about OCI Data Science. Can you tell us a bit about it?

    Himanshu: OCI Data Science is the cloud service focused on serving the data scientist throughout the full machine learning life cycle with support for Python and open source. 

    The service has many features, such as model catalog, projects, JupyterLab notebook, model deployment, model training, management, model explanation, open source libraries, and AutoML. 

    09:35
    Lois: Himanshu, what are the core principles of OCI Data Science? 

Himanshu: There are three core principles of OCI Data Science. The first principle is accelerated: accelerating the work of the individual data scientist. OCI Data Science provides data scientists with open source libraries along with easy access to a range of compute power, without having to manage any infrastructure. It also includes Oracle's own library to help streamline many aspects of their work. 
The second principle is collaborative. It goes beyond an individual data scientist’s productivity to enable data science teams to work together. This is done through the sharing of assets, reducing duplicative work, and supporting the reproducibility and auditability of models for collaboration and risk management. 

Third is enterprise grade. That means it's integrated with all the OCI security and access protocols. The underlying infrastructure is fully managed; the customer does not have to think about provisioning compute and storage. And the service handles all the maintenance, patching, and upgrades, so users can focus on solving business problems with data science. 

    10:50

Nikita: Let’s drill down into the specifics of OCI Data Science. So far, we know it’s a cloud service to rapidly build, train, deploy, and manage machine learning models. But who can use it? Where is it? And how is it used?

    Himanshu: It serves data scientists and data science teams throughout the full machine learning life cycle. 

Users work in a familiar JupyterLab notebook interface, where they write Python code. And how is it used? Users preserve their models in the model catalog and deploy their models to managed infrastructure. 

    11:25

    Lois: Walk us through some of the key terminology that’s used.

Himanshu: Some of the important product terminology of OCI Data Science: first, projects. Projects are containers that enable data science teams to organize their work. They represent collaborative workspaces for organizing and documenting data science assets, such as notebook sessions and models. 

Note that a tenancy can have as many projects as needed, without limits. Next, the notebook session is where the data scientist works. Notebook sessions provide a JupyterLab environment with pre-installed open source libraries and the ability to add others. Notebook sessions are interactive coding environments for building and training models. 

Notebook sessions run on managed infrastructure, and the user can select CPU or GPU, the compute shape, and the amount of storage without having to do any manual provisioning. The other important feature is the Conda environment. Conda is an open source environment and package management system that was created for Python programs. 

    12:33

    Nikita: What is a Conda environment used for?

Himanshu: It is used in the service to quickly install, run, and update packages and their dependencies. Conda easily creates, saves, loads, and switches between environments in your notebook sessions.

    12:46

    Nikita: Earlier, you spoke about the support for Python in OCI Data Science. Is there a dedicated library?

Himanshu: Oracle's Accelerated Data Science (ADS) SDK is a Python library that is included as part of OCI Data Science. 
ADS has many functions and objects that automate or simplify the steps in the data science workflow, including connecting to data, exploring and visualizing data, training a model with AutoML, evaluating models, and explaining models. In addition, ADS provides a simple interface to access the Data Science service model catalog and other OCI services, including Object Storage. 
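As a rough illustration of the workflow ADS supports, the sketch below trains an ordinary scikit-learn model and then uses ADS to prepare deployment artifacts and save the model to the model catalog. The class names and parameters here (GenericModel, the Conda environment slug, and so on) follow the ADS documentation but have changed across releases, so this is a sketch of the documented pattern, not an exact recipe.

```python
# Hedged sketch of the ADS train-prepare-save workflow.
# Assumes the oracle-ads and scikit-learn packages are installed.
import ads
from ads.model.generic_model import GenericModel
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Inside an OCI notebook session, resource principals avoid managing API keys.
ads.set_auth(auth="resource_principal")

# Train an ordinary scikit-learn model.
X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=200).fit(X, y)

# Wrap the estimator with ADS, generate the model artifacts, and push the
# model to the model catalog, where teammates can find and reload it.
model = GenericModel(estimator=clf, artifact_dir="/tmp/iris_model")
model.prepare(
    inference_conda_env="generalml_p38_cpu_v1",  # illustrative Conda slug
    force_overwrite=True,
)
model.save(display_name="iris-demo")  # stores the artifact in the catalog
```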

    13:24

    Lois: I also hear a lot about models. What are models?

Himanshu: Models define a mathematical representation of your data and business process. You create models in notebook sessions, inside projects. 

    13:36

    Lois: What are some other important terminologies related to models?

Himanshu: The next term is the model catalog. The model catalog is a place to store, track, share, and manage models. 
The model catalog is a centralized and managed repository of model artifacts. A stored model includes metadata about the provenance of the model, including Git-related information and the script or notebook used to push the model to the catalog. Models stored in the model catalog can be shared across members of a team, and they can be loaded back into a notebook session. 

    The next one is model deployments. Model deployments allow you to deploy models stored in the model catalog as HTTP endpoints on managed infrastructure. 

    14:24

    Lois: So, how do you operationalize these models?

Himanshu: Deploying machine learning models as web applications, with HTTP API endpoints serving predictions in real time, is the most common way to operationalize models. HTTP endpoints, or API endpoints, are flexible and can serve requests for model predictions. Data Science jobs enable you to define and run repeatable machine learning tasks on fully managed infrastructure. 
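To show what serving real-time predictions over an HTTP endpoint can look like from the client side, here is a hedged sketch that signs a request to a model deployment endpoint using the OCI request signer. The endpoint URL and payload shape are placeholders; an actual deployment's OCID, region, and expected input format will differ.

```python
# Hedged sketch: invoking a model deployment's HTTP endpoint.
# Assumes the oci and requests packages and a valid ~/.oci/config.
import requests
from oci.config import from_file
from oci.signer import Signer

config = from_file()
auth = Signer(
    tenancy=config["tenancy"],
    user=config["user"],
    fingerprint=config["fingerprint"],
    private_key_file_location=config["key_file"],
)

# Placeholder endpoint for a deployed model; substitute your deployment's URL.
endpoint = (
    "https://modeldeployment.<region>.oci.customer-oci.com"
    "/<model-deployment-ocid>/predict"
)

payload = {"data": [[5.1, 3.5, 1.4, 0.2]]}  # input shape depends on the model
response = requests.post(endpoint, json=payload, auth=auth)
print(response.json())
```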

    Nikita: Thanks for that, Himanshu. 

    14:57

Did you know that Oracle University offers free courses on Oracle Cloud Infrastructure? You’ll find training on everything from cloud computing, database, and security to artificial intelligence and machine learning, all free to subscribers. So, what are you waiting for? Pick a topic, leverage the Oracle University Learning Community to ask questions, and then sit for your certification.

    Visit mylearn.oracle.com to get started. 

    15:25

    Nikita: Welcome back! The Oracle AI Stack consists of AI services and machine learning services, and these services are built using AI infrastructure. So, let’s move on to that. Hemant, what are the components of OCI AI Infrastructure?
Hemant: OCI AI Infrastructure is mainly composed of GPU-based instances. Instances can be virtual machines or bare metal machines, with high-performance cluster networking that allows instances to communicate with each other. Superclusters are a massive network of GPU instances with multiple petabytes per second of bandwidth. And a variety of fully managed storage options, from a single byte to exabytes, are available without upfront provisioning. 

    16:14

    Lois: Can we explore each of these components a little more? First, tell us, why do we need GPUs?

Hemant: ML and AI need lots of repetitive computations to be made on huge amounts of data, and parallel computing on GPUs is designed for running many processes at the same time. A GPU is a piece of hardware that is incredibly good at performing computations. 
A GPU has thousands of lightweight cores, all working on their share of data in parallel. This gives them the ability to crunch through extremely large data sets at tremendous speed. 

    16:54

    Nikita: And what are the GPU instances offered by OCI?

Hemant: GPU instances are ideally suited for model training and inference. Bare metal and virtual machine compute instances powered by NVIDIA H100, A100, A10, and V100 GPUs are made available by OCI. 

    17:14

    Nikita: So how do we choose what to train from these different GPU options? 

Hemant: For large-scale AI training, data analytics, and high performance computing, bare metal instances with 8x NVIDIA H100 or 8x NVIDIA A100 GPUs can be used. 

These provide up to nine times faster AI training and 30 times higher acceleration for AI inferencing. The other bare metal and virtual machines are used for small AI training, inference, streaming, gaming, and virtual desktop infrastructure. 

    17:53

    Lois: And why would someone choose the OCI AI stack over its counterparts?

Hemant: Oracle offers all the features and is the most cost-effective option when compared to its counterparts. 

For example, the BM.GPU4.8 (version 2) instance costs just $4 per hour and is used by many customers. 

Superclusters are a massive network with multiple petabytes per second of bandwidth. They can scale up to 4,096 OCI bare metal instances with 32,768 GPUs. 

We also have a choice of bare metal A100 or H100 GPU instances, and we can select a variety of storage options, like object store, block store, or even file system. For networking speeds, we can reach 1,600 Gbps with A100 GPUs and 3,200 Gbps with H100 GPUs. 

With OCI storage, we can select local SSD with up to four NVMe drives, block storage up to 32 terabytes per volume, object storage up to 10 terabytes per object, and file systems up to eight exabytes per file system. The OCI File Storage service employs five-way replicated storage located in different fault domains to provide redundancy for resilient data protection. 

HPC file systems, such as BeeGFS and many others, are also offered. OCI HPC file systems are available on Oracle Cloud Marketplace and make it easy to deploy a variety of high-performance file servers. 

    19:50

    Lois: I think a discussion on AI would be incomplete if we don’t talk about responsible AI. We’re using AI more and more every day, but can we actually trust it?

    Hemant: For us to trust AI, it must be driven by ethics that guide us as well.

    Nikita: And do we have some principles that guide the use of AI?
Hemant: AI should be lawful, complying with all applicable laws and regulations. AI should be ethical; that is, it should ensure adherence to the ethical principles and values that we uphold as humans. And AI should be robust, both from a technical and a social perspective, because even with good intentions, AI systems can cause unintentional harm.

AI systems do not operate in a lawless world. A number of legally binding rules at the national and international levels apply or are relevant to the development, deployment, and use of AI systems today. The law not only prohibits certain actions but also enables others, like protecting the rights of minorities or protecting the environment. Besides horizontally applicable rules, various domain-specific rules exist that apply to particular AI applications, for instance, the medical device regulation in the health care sector. 

In the AI context, equality entails that the system’s operations cannot generate unfairly biased outputs. And while we adopt AI, citizens’ rights should also be protected. 

    21:30

    Lois: Ok, but how do we derive AI ethics from these?

    Hemant: There are three main principles. 
AI should be used to help humans and allow for oversight. It should never cause physical or social harm. Decisions taken by AI should be transparent and fair, and they should also be explainable. AI that follows these ethical principles is responsible AI. 

So if we map the AI ethical principles to responsible AI requirements, they look like this: AI systems should follow human-centric design principles and leave meaningful opportunity for human choice. This means securing human oversight. AI systems, and the environments in which they operate, must be safe and secure; they must be technically robust and should not be open to malicious use. 

The development, deployment, and use of AI systems must be fair, ensuring equal and just distribution of both benefits and costs. AI should be free from unfair bias and discrimination. Decisions taken by AI should, to the extent possible, be explainable to those directly and indirectly affected. 

    23:01

    Nikita: This is all great, but what does a typical responsible AI implementation process look like? 

Hemant: First, a governance structure needs to be put in place. Second, develop a set of policies and procedures to be followed. And once implemented, ensure compliance through regular monitoring and evaluation. 

    Lois: And this is all managed by developers?

Hemant: The typical roles involved in the implementation cycle are the developers, deployers, and end users of the AI. 

    23:35

    Nikita: Can we talk about AI specifically in health care? How do we ensure that there is fairness and no bias?

    Hemant: AI systems are only as good as the data that they are trained on. If that data is predominantly from one gender or racial group, the AI systems might not perform as well on data from other groups. 

    24:00

    Lois: Yeah, and there’s also the issue of ensuring transparency, right?

    Hemant: AI systems often make decisions based on complex algorithms that are difficult for humans to understand. As a result, patients and health care providers can have difficulty trusting the decisions made by the AI. AI systems must be regularly evaluated to ensure that they are performing as intended and not causing harm to patients. 

    24:29

    Nikita: Thank you, Hemant and Himanshu, for this really insightful session. If you’re interested in learning more about the topics we discussed today, head on over to mylearn.oracle.com and search for the Oracle Cloud Infrastructure AI Foundations course. 

Lois: That’s right, Niki. You’ll find demos that you can watch as well as skill checks that you can attempt to better your understanding. In our next episode, we’ll get into the OCI AI services we discussed today and talk about them in more detail. Until then, this is Lois Houston…

    Nikita: And Nikita Abraham, signing off!

    25:05

    That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

AI and Cybersecurity with Karissa Breen and EJ Wise

    Welcome back to the AI Australia podcast for 2024, with your hosts Natalie Rouse and Kobi Leins.

    We are joined by two experts in the field of cybersecurity - "The Voice of Cyber" multimedia journalist Karissa Breen and internationally recognised cyber law expert Emma (EJ) Wise.

    We start off learning about Aboriginal Birthing Trees, then talk about the ethics of self-driving cars, international regulation & frameworks, education, the use of autonomous technology in the military, cyber law, the use of AI in manipulating upcoming elections, information warfare, the naming of the Russian hacker responsible for the Medibank hack, and the opportunity for AI to drive innovation on both sides of the fence - for organisations protecting themselves and for hackers. We make it clear we do not condone cyber criminality!

    Unfortunately we did lose the last half of EJ's audio, which is a technical travesty because she did have many fascinating things to say - we will get her back to share more of her great thoughts and experiences at a later date. 

     

    Links:

    Karissa Breen

    Emma (EJ) Wise

    Aboriginal Birthing Tree

    Missy Cummings on Tesla Autopilot

    Automating the Banality and Radicality of Evil

    Moral Machine

     

#3/6 Kathrin Schwan – „Ich sehe große Chancen für uns als Gesellschaft“ (“I see great opportunities for us as a society”)

In STRAT TALKS episode 6, AI expert Kathrin Schwan, Managing Director Data & AI Network at Accenture DACH, explains the power of generative AI for companies and the economy. Kathrin offers strategic advice for companies engaging with AI and GenAI and shares practical approaches and best practices. She also explains the important concept of Responsible AI and the role of trust and security. The episode closes with a look into the future, in which Kathrin sees artificial intelligence playing an important role in solving the big societal questions.

Kathrin Schwan is Managing Director at Accenture and leads the Data & AI practice in DACH, with over 1,200 data experts. She has more than 20 years of experience across the entire Data & Analytics value chain. Before joining Accenture, she was Analytics Business MD for Europe & Americas at BCG GAMMA and VP Data Science at Criteo Inc. Kathrin holds a master's degree from Witten/Herdecke University.

AI Australia Presents: This Week in AI vol 11

    In our last episode for 2023, Kobi & Nat discuss the following recent developments around the world and how they relate to Australia:

• The EU AI Act, which has moved another step closer to reality, albeit with some conditions for regulating foundation models that not everyone is happy with.
• Kobi has been reading Nancy Leveson's new book, which looks at large historical accidents and unpacks what they look like from a human perspective.
• We wonder how the Act will plug into the new international standards coming out, and whether that might result in some more granular teeth for the Act.
• Microsoft has been quietly talking to industrial organisations in the US, leading us to wonder how the Hollywood writers' strike will impact other union negotiations around the world.
• The attribution of job losses to AI or automation is proving difficult, and will make future union negotiations even more interesting.
• Kobi talks about the history and evolution of boards over time, which leads us to circle back to the OpenAI board purge and reflect on recent information that has come out about why exactly Sam Altman was fired in the first place.

AI Australia Presents: This Week in AI vol 10

    This week join Kobi and Natalie as they discuss some perhaps seemingly-unrelated global trends, and ponder how AI may or may not impact or amplify them in the future.

    We raise questions about:

    • There are around 40 upcoming elections happening around the world in 2024; have we already seen advancements in AI play a role in driving a shift to the right, and what might we expect to see in the next year?
    • There is a trend where local media is becoming increasingly scarce - what does that mean for our local collective understandings & conversations, and global polarisation?
    • The lack of diversity (gender and otherwise) in technology media (and the technology industry in general) presents a major hurdle for language models of the future as it leads to a lack of variety in the type of voices on record today, and hence in training data tomorrow. 

    Links:

    Where are all the godmothers of AI? In the Guardian

    A 'Trump moment' in the Netherlands shows that Europe still has a populist problem - CNN

    Brace for elections: 40 countries are voting in 2024 - Bloomberg

    The Guardian view on local journalism's decline: bad news for democracy

AI Australia Presents: This Week in AI vol 9

    This week in AI has been a week like no other - join Natalie & Kobi to mull over the back and forth at OpenAI over the last week as we delve into what happened and what might it all mean. We are saddened about the departure of all the women on the OpenAI board, but wonder if the previous academic-focussed board had found themselves out of their depth as the value of the company grew. We are more than a little curious about the eventual movie that will no doubt be made about this highly dramatic saga!

    We also talk about the importance of governance around not just data but AI, including processes & accountability rather than just technical aspects.

    No links this week because there is not enough space on the internet to link to all of the articles covering the happenings of this last week.

Effectively Adopting Responsible AI In Organisations with Tony Hibbert

In this episode, our guest Tony shares his insights on responsible AI, its challenges, and how experts can implement responsible AI effectively in their organisations.

    Tony Hibbert is an experienced AI Governance Expert, helping financial services and tech companies bring order to the chaos, by navigating the path to responsible AI.

    Currently, he is an AI Governance Expert at ING Bank, specializing in data protection (19+ years), cybersecurity (15+ years), risk management (4+ years), and ethics (4+ years) in the AI domain.

If you want to be our guest, or you know someone who would be a great guest on our show, just send an email to info@globalriskconsult.com with the subject line “Global Risk Community Show” and a brief explanation of what topic you would like to talk about, and we will be in touch with you as soon as possible.

Maori Data Sovereignty with Megan Tapsell, Dr Karaitiana Taiuru JP, and Assoc Prof Maui Hudson

    Welcome to another episode from the AI Australia podcast, with your hosts Natalie Rouse and Kobi Leins.

    Continuing the conversation from our last interview on the importance of Indigenous data rights, this time we cross the ditch to Aotearoa New Zealand, discussing Maori data sovereignty and artificial intelligence with three special guests: Megan Tapsell, Associate Professor Maui Hudson, and Dr Karaitiana Taiuru.

    They discuss their roles and responsibilities in advocating for indigenous data rights, the challenges and opportunities they face in this area, and the importance of data sovereignty for Maori communities.

    Towards the end, they call upon corporations to involve and fairly pay Indigenous people when working with data that impacts their communities.

    Apologies for some audio quality issues at times - we experienced some technical difficulties in recording, but are very pleased to be able to bring you most of what we felt was an important & valuable conversation.

    Links:

    Megan Tapsell on Linkedin

    Maui Hudson on Linkedin

    Karaitiana Taiuru on Linkedin

Augmenting Gov with AI

    Artificial intelligence is transforming how governments operate, but adopting new tech can be a challenge. In this week's episode, we chat with Micah Guadet, a trailblazer in using AI ethically and responsibly in the public sector.

Micah developed the first course on adopting AI in government, sharing how to navigate policy, fear, and change management. He's an evangelist for the potential of AI to improve public safety and operations while also maintaining public trust.

We discuss Micah's journey to becoming an AI guru, his innovative ideas, and the projects he's working on to usher governments into the AI age smoothly and beneficially. Tune in for an insightful conversation on the public sector's AI revolution.

    Check out Micah's new AI course for public sector personnel: https://www.civicinnovation.ai/courses/chatgpt

    Support our podcast!
    Everything EM Weekly: www.thereadinesslab.com/em-weekly-links
    EM Weekly shirts and merch: https://www.thereadinesslab.com/shop
    The Readiness Lab: https://www.thereadinesslab.com/
    Doberman Emergency Management: www.dobermanemg.com
    Connect with me! https://www.linkedin.com/in/zborst/

    Major Endorsements:
    L3Harris Technologies' BeOn PPT App. Learn more about this amazing product here: www.l3harris.com

    Doberman Emergency Management Group provides subject matter experts in planning and training: www.dobermanemg.com

#AI #artificialintelligence #machinelearning #government #publicsector #publicsafety #ethics #responsibleAI #changemanagement #innovation #futureofwork #emergingtech #civictech #govtech #chatGPT

Eyes of AI with Khoa Le

    It’s no secret that AI is enhancing efficiency across nearly every industry in the world today. Unsurprisingly, that includes dentistry and pathology detection. Khoa Le is the founder and CEO of Eyes of AI, which leverages the power of technology to detect, analyze, and diagnose dental X-Ray images. In this episode, he shares his story, from building his skillset to establishing the idea for Eyes of AI and serving his client base today. We touch on preparation and tracking results, discuss the tumultuous waters of navigating ethics and privacy, and explore the benefits of bringing such a high level of accuracy to differential diagnosis. Khoa gets candid about some of the biggest challenges he has faced while building his company and offers his perspective on the growing influence of AI in Australia and beyond. In closing, he shares his thoughts on the human element of the medical industry and why it will always be necessary. Tune in today to hear all this and more!

    Key Points From This Episode:

    • Khoa Le's career trajectory which led to the founding of Eyes of AI.

    • How Eyes of AI leverages the power of technology for detection, analysis, and diagnosis of dental X-Ray images.

• What Eyes of AI's client base largely consists of.

    • Forming the idea for the product.

• The company's flagship: 3D images. 

    • Preparation and results.

    • Navigating ethics and privacy by using de-identified data.

    • Benefits of the 95% accuracy.

    • What the output provides and why it cannot be considered a final diagnosis.

    • The biggest challenges Khoa has faced in getting Eyes of AI off the ground.

    • His perspective on how the AI industry is growing in Australia.

    • Why a personal connection will always be relevant to the medical field.

    Links Mentioned in Today’s Episode:

    Khoa Le on LinkedIn

    Eyes of AI on Instagram

    Eyes of AI

    Eyes of AI Journal

    Natalie Rouse on LinkedIn

    Dr Kobi Leins

    Dr Kobi Leins on LinkedIn

    Dr Kobi Leins on Twitter

    Eliiza

     

Indigenous Data Rights with Distinguished Professor Maggie Walter

    It is long overdue that we, as organisations, and people in organisations, start to question our own thinking. And as a global society need to move toward making a systemic paradigm shift when it comes to Indigenous research and all that it encompasses. Today, we are joined by a very special guest, Distinguished Professor Maggie Walter, coming to us today from Nipaluna, Tasmania. Maggie is a Palawa woman who descends from the Pairrebenne people of north-eastern Tasmania and is also a member of the Tasmanian Briggs Family. She is a Distinguished Professor Emerita of Sociology at the University of Tasmania and is still heavily involved in Indigenous data sovereignty space and anything Indigenous data related. Join our very timely conversation as Maggie takes us on a deep dive into Indigenous data sovereignty 101, the two sides of data collection, and the problem with AI and using existing data sets. We talk about different challenges with funding and how there is a massive requirement for a paradigm shift in Indigenous research. This is a loaded episode with great points of conversation including a global look at relationships with indigenes, why presuming research equals change is dangerous, and why Maggie is more after ontological disturbance rather than money and time. You don’t want to miss this episode, so start listening, and join the conversation.

    Key Points From This Episode:

    • An introduction to Maggie Walter, a Palawa woman and descendent of the Pairrebennes.

    • Maggie runs us through Indigenous data sovereignty 101. 

    • The statistical indigene and what colonisation has done to us. 

• She explains the two sides of data collection. 

    • The problem with AI and using existing data sets: we always end as the problem. 

    • Thoughts on levers and tools aimed at shifting and solving the Indigenous data problem.

    • The starting point, humans, and why AI is scary as it relates to Indigenous data (collection).

    • She shares the challenges faced with Indigenous data collection. 

    • The challenges with funding and the required paradigm shift for Indigenous research.

    • Why Indigenous research projects can’t be concentrated in health and should diversify.

    • Her encouragement to challenge and flip mindsets with relationships to first peoples.

    • We take a global look at other countries and their relationships with Indigenous peoples. 

    • The danger of presuming research equals change. 

    • Maggie divulges why she doesn’t do advisory committees anymore. 

    • The need for ethics codes. 

    • Why systemic change can only be done in increments.

    • A discussion on who owns Indigenous data and the benefits, or lack thereof, of AI.

    • Maggie tells us why she’s after ontologic disturbance rather than money and time.

    Links Mentioned in Today’s Episode:

    Maggie Walter on LinkedIn

    UN Declaration on the Rights of Indigenous Peoples

    Closing the Gap Memorandum

    Professor Sally Engle Merry

    AIATSIS Code of Ethics 2020

    Natalie Rouse on LinkedIn

    Dr Kobi Leins

    Dr Kobi Leins on LinkedIn

    Dr Kobi Leins on Twitter

    Eliiza

     

Women in AI APAC Interview Series episode 9 - Amanda Princi

    Eliiza is teaming up with Jewelrock and Women in AI APAC to run a limited interview series in support of the Women in AI Awards 2023. The series will showcase conversations with a number of thought leaders and subject matter experts across industry and academia in Australia and New Zealand on topics like career, the future of AI technologies and the responsible application of AI.

    Our esteemed guests are all representing sponsors of the Women in AI Awards, which will be held on 16th June in Sydney for the APAC region.

    Our next guest is Amanda Princi, Head of Data Enablement at Transurban.

    Bio:

    As the Head of Data Enablement, Amanda is responsible for driving data culture at Transurban.
Amanda leads a team that enables people to easily manage data, confidently use data, and maximise data’s value in a safe, ethical way.


Amanda is passionate about a workplace free from data jargon, where teams have positive data experiences and can embed data thinking into daily activities at any level of the organisation.

Prior to Transurban, Amanda’s career began as a banker at National Australia Bank (NAB). Over 10 years, Amanda transformed her banking career into a data career by leveraging data to help millions of customers and achieve significant impact. Amanda’s determination to learn all things data and evangelise data use led to Amanda holding multiple senior data leadership positions specialising in data analytics and insight, data management, and governance.

    Links:

    Amanda Princi on LinkedIn

Women in AI APAC Interview Series episode 8 - Kelly Brough

    Eliiza is teaming up with Jewelrock and Women in AI APAC to run a limited interview series in support of the Women in AI Awards 2023. The series will showcase conversations with a number of thought leaders and subject matter experts across industry and academia in Australia and New Zealand on topics like career, the future of AI technologies and the responsible application of AI.

    Our esteemed guests are all representing sponsors of the Women in AI Awards, which will be held on 16th June in Sydney for the APAC region.

    Our next guest is Kelly Brough, Managing Director, ANZ Applied Intelligence at Accenture ANZ.

    Bio:

    Kelly leads the Applied Intelligence Network for Accenture ANZ.

Responding to businesses increasingly turning to AI to enable their growth, productivity, and creativity, Kelly leads a talented team of strategists, scientists, and value innovators to support our clients in designing and executing data-led transformations. She is passionate about the opportunities for society being unleashed by AI technologies and committed to focussing energy on the responsible application of AI to build trust and protect people and organisations while delivering new value outcomes.

Kelly has over 25 years’ experience across both industry and consulting, building digital and data businesses in the Retail, Media, and Technology sectors. Prior to joining Accenture, Kelly held executive roles including Chief Digital Officer at Sensis (Telstra), Global Digital Director at Lonely Planet, CEO of Allegran Online Dating (Daily Mail), and Director at AOL Europe.

Kelly has an MBA from INSEAD, an MS in Environmental Engineering from the University of Virginia, and completed her undergraduate degree in Engineering Science at Harvard University.

    Links:

    Kelly Brough on Linkedin

     

Women in AI APAC Interview Series episode 7 - Kendall Jenner

    Eliiza is teaming up with Jewelrock and Women in AI APAC to run a limited interview series in support of the Women in AI Awards 2023. The series will showcase conversations with a number of thought leaders and subject matter experts across industry and academia in Australia and New Zealand on topics like career, the future of AI technologies and the responsible application of AI.

    Our esteemed guests are all representing sponsors of the Women in AI Awards, which will be held on 16th June in Sydney for the APAC region.

    Our next guest is Kendall Jenner, Research Assistant at STELaRLab, Lockheed Martin's first research laboratory outside of the US.

    Bio:

As a Research Assistant, Kendall performs research into Artificial Intelligence and Machine Learning, investigating the latest developments in the field and how they can be applied to defence applications.

Before entering the workforce, Kendall studied a Bachelor of Science in Theoretical and Experimental Physics at the University of Adelaide, followed by a Master of Philosophy in machine learning for gravitational wave astrophysics (still ongoing) with OzGrav, the Australian Research Council Centre of Excellence for Gravitational Wave Discovery. Since starting with STELaRLab and Lockheed Martin Australia in the Graduate Program early last year, Kendall has worked on a variety of projects. She has investigated using machine learning for prognostics and health management of aircraft, link prediction in knowledge graphs, creating simulated data for a variety of other projects in the lab, and processing of overhead imagery. She enjoys the variety of tasks that she and her team partake in, and finds that the work she does at STELaRLab is the perfect balance between interesting and exciting state-of-the-art research and the applications and practicality of implementing new solutions.

    Kendall's first introduction to machine learning was during a research internship with the University of Adelaide High Energy Astrophysics group, where the group was tasked with using a random forest classifier to classify blazars, which are a type of active galaxy. This project piqued her interest in AI, and so she pursued a career in it, leading to some extra-curricular activities with Adept at the University of Adelaide, work experience and internships, choice of topic for her postgraduate research degree, and now to her current role with STELaRLab.

    Outside of work and university, Kendall is a professional footballer (soccer) for West Torrens Birkalla SC. She enjoys gardening, playing boardgames with friends, and generally doing outdoor activities.

    Links:

    Kendall Jenner on LinkedIn

Women in AI APAC Interview Series episode 6 - Deborah Henderson

    Eliiza is teaming up with Jewelrock and Women in AI APAC to run a limited interview series in support of the Women in AI Awards 2023. The series will showcase conversations with a number of thought leaders and subject matter experts across industry and academia in Australia and New Zealand on topics like career, the future of AI technologies and the responsible application of AI.

    Our esteemed guests are all representing sponsors of the Women in AI Awards, which will be held on 16th June in Sydney for the APAC region.

    Our next guest is Deborah Henderson, Data & Cloud Partner at KPMG Australia.

    Bio:

Deborah is a Partner in KPMG Australia’s Data and Cloud practice and has significant experience in technology, specialising in data and analytics. She has worked internationally and locally in Australia, helping organisations across multiple industries lift their data and analytics maturity to enhance the value of their data assets. Deborah combines her vast technical background and experience in leading large-scale data transformation programs across the public and private sectors and multiple industries to provide a holistic perspective on how organisations can drive value from data. Deborah is KPMG Australia’s Data and Cloud Education sector lead, and also heads up its ESG data offerings whilst continuing to deliver data services for clients in multiple industries.

Deborah is recognised as a leader in delivering successful business outcomes across the entire data value chain, from strategy through to implementation, and has delivered keynotes at conferences and presented as a guest lecturer at some of the top universities in Australia. Deborah leverages her expertise in modern cloud technology and human-centred design to uplift and transform data and analytics capability for clients, helping them navigate complexity to extract more organisational value from the investments they make and close the gap from data to impact.

    She believes that organisations need to be data fit to be future smart and that those organisations poised to use evidence based practice to make informed decisions and inspire action will have a lasting and positive impact that enables people, organisations, and society to thrive.

    Links:

    Deborah Henderson on Linkedin

    Deborah Henderson at KPMG Australia

Women in AI APAC Interview Series episode 5 - Dr Rena Logothetis

    Eliiza is teaming up with Jewelrock and Women in AI APAC to run a limited interview series in support of the Women in AI Awards 2023. The series will showcase conversations with a number of thought leaders and subject matter experts across industry and academia in Australia and New Zealand on topics like career, the future of AI technologies and the responsible application of AI.

    Our esteemed guests are all representing sponsors of the Women in AI Awards, which will be held on 16th June in Sydney for the APAC region.

    Our next guest is Dr Irini (Rena) Logothetis, Research Fellow at the Applied AI Institute at Deakin University.

    Bio:

From software development in Melbourne, to managing an island hotel in Greece, to showing designs at Fashion Week in Athens, to biomechanical engineering, to medical and assistive technology research at an AI institute: Rena Logothetis has had an unconventional career path.

    Researcher - Deakin University’s Applied AI Institute (A2I2)

    Specialising in system design and analysis, projects including:

    • TR&R trauma support system in The Alfred Hospital Melbourne
    • Development of a multi-use Smart Lab designed to test assistive technology to allow older people to live better, safer, longer lives at home
    • A world-first autonomous wheelchair system
    • A world-first falls prevention research project

     

    PhD – Engineering Biomechanics

    Co-Founder & Managing Director, Avioti Fashion Ed in Athens, Greece

    Co-Founder & Managing Director of a hotel on the island of Lefkas, Greece

    Links:

    Dr Rena Logothetis on LinkedIn

    Dr Rena Logothetis at Deakin University

How to think about and build AI responsibly

There is a whole team at Google dedicated to designing AI best practices. They are committed to making progress in the responsible development of AI and share reliable, effective, user-centered research, tools, datasets, and other resources with users. Meet one of the members of the team, Christina Greer, as she shares the ins and outs of working in the field of Responsible AI and how her personal experience and values make her a key player in this space!

    Resources:

    AI Principles: https://goo.gle/3VrCpJP 

    Responsible AI practices: https://goo.gle/41XVeqI

    Guest bio:
    Christina Greer is a software engineer at Google Research. A veteran of a variety of efforts across the company including ads, data processing pipelines, and Google Assistant, she joined Google Research in 2018 to focus on bias and fairness in ML. Since then, she has built both teams and software to support measuring and mitigating ML models for bias, and consults with products across Google to support building safer products that work for everyone. In her spare time, Christina is a creative writer and a mom of 2 great kids. 

     

    #AI #ML

Women in AI APAC Interview Series episode 4 - Dr Yun Sing Koh

    Eliiza is teaming up with Jewelrock and Women in AI APAC to run a limited interview series in support of the Women in AI Awards 2023. The series will showcase conversations with a number of thought leaders and subject matter experts across industry and academia in Australia and New Zealand on topics like career, the future of AI technologies and the responsible application of AI.

    Our esteemed guests are all representing sponsors of the Women in AI Awards, which will be held on 16th June in Sydney for the APAC region.

    Our next guest is Dr Yun Sing Koh, Associate Professor in the School of Computer Science at the University of Auckland.

    Bio:

Dr Yun Sing Koh is an associate professor in the School of Computer Science at the University of Auckland, specializing in Artificial Intelligence (AI) and Machine Learning (ML). Currently, she holds the position of Director of the Centre of Machine Learning for Social Good. Her research mainly focuses on three areas: continual learning and adaptation, anomaly detection, and data stream mining. She applies her research to interdisciplinary applications in the environment and health domains. Throughout her career, she has published more than 100 peer-reviewed publications in top conference venues such as IJCAI, IEEE ICDE, and IEEE ICDM. She has also published in high-quality journals, including the Machine Learning Journal and the Journal of Artificial Intelligence, and has received numerous awards, such as the best paper award at the Australasian Artificial Intelligence Conference (2022, 2018), the best research paper award at IEEE Data Science and Advanced Analytics (2022), and the AUT University Vice-Chancellor Emerging Researcher Award (2009).

Between 2012 and 2023, she secured various grants and contracts, including being a primary investigator on prestigious international and national grants, such as the Royal Society of New Zealand Marsden Fast-Start grant and an Office of Naval Research (US) grant. Currently, she holds or is part of 7 major grants as a principal or associate investigator. Yun Sing is currently a fellow of the 2023 Convergence Research (CORE) Institute, funded by the United States National Science Foundation. Aside from her research work, she has also been active in service and leadership roles in various organizations. She held the position of General Chair of the Australasian Data Mining Conference 2022 and General Chair of the IEEE International Conference on Data Mining 2021. She also served as Program Committee Co-Chair for the Australasian Data Mining Conference 2018, Workshop and Tutorial Chair for the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery 2021, Publicity Chair for the 24th ACM International Conference on Information and Knowledge Management 2015, and Workshop Co-Chair for the Pacific-Asia Conference on Knowledge Discovery and Data Mining 2011. Since 2018, she has been a steering committee member for the Australasian Data Mining Conference and is a senior program committee member for top conferences, including AAAI, IJCAI, PAKDD, and ECML.

    Links:

    Dr Yun Sing Koh on LinkedIn

    Dr Yun Sing Koh at the University of Auckland

Women in AI APAC Interview Series episode 3 - Aruna Kolluru

    Eliiza is teaming up with Jewelrock and Women in AI APAC to run a limited interview series in support of the Women in AI Awards 2023. The series will showcase conversations with a number of thought leaders and subject matter experts across industry and academia in Australia and New Zealand on topics like career, the future of AI technologies and the responsible application of AI.

    Our esteemed guests are all representing sponsors of the Women in AI Awards, which will be held on 16th June in Sydney for the APAC region.

    Our next guest is Aruna Kolluru, the Chief Technologist, Emerging Technology at Dell Technologies, Asia Pacific & Japan.

    Bio:

Aruna Kolluru is the Chief Technologist for Emerging Technologies at Dell Technologies, Asia Pacific and Japan. In this capacity, she collaborates with organizations throughout the Asia-Pacific region to provide them with a comprehensive understanding of the potential of emerging technologies and how to conceptualize, create, and leverage these solutions to achieve tangible business outcomes. She is also a member of the National AI Centre (Australia) Think Tanks, providing expertise and insights on the topics of Responsible AI, Diversity and Inclusion in AI, and AI at Scale. She is also an Expert Advisor at the Responsible AI Institute and a technical committee member of TinyML Asia Pacific.

    Based in Sydney, Aruna is at the forefront of emerging technologies and has helped numerous companies build solutions with Artificial Intelligence, Digital Twins, Quantum Computing, Federated Learning, Big Data and other innovative technologies. She has 22 years of experience in the industry, covering a broad spectrum of technologies.

Prior to her role at Dell Technologies, Aruna worked in various positions, including Cloud Platforms Technologist, Big Data Technologist, Middleware Technologist, Java Architect, and Presales Specialist at both IBM and Oracle. Having worked across a variety of different technology areas, she appreciates facing new challenges and has enjoyed an ongoing learning process.

Her technical expertise, coupled with her passion for educating and exciting others about new technology, is what sets Aruna apart. She is an accomplished thought leader, passionate about the intersection of business and technology in bettering business and society at large, and is particularly interested in furthering the appetite for adoption of emerging technologies to achieve this. Her work on applying machine learning to optimise operational efficiency in data back-up systems has been patented. In addition, she has received and been shortlisted for several industry awards, recognising the work she has done to support customers across the APJ region. She is a co-author of the book 60 Leaders in AI.

    Aruna is a speaker at AI conferences, round tables, podcasts, and several other events. Furthermore, she is an active mentor in the IT industry, frequently speaking at a variety of programs, including Dell Technologies' Girls in Engineering and Technology, the Dell and UTS partnership for Women in Action, AI Avengers program, the Dell Technologies Code like a Girl program and guest lectures at universities.

Aruna holds a B.Tech in Computer Science and Systems Engineering from Andhra University in India, a master's degree in Software Systems from BITS Pilani, India, and a Graduate Diploma in Management from the Australian Graduate School of Management. Aruna is a self-learner who has completed extensive training programs and certifications to develop advanced technical skills and stay abreast of the latest developments.

    Links:

    Aruna Kolluru on Linkedin

    National AI Centre Think Tanks

    Responsible AI Institute

    TinyML Asia Pacific

    Applying machine learning to optimise operational efficiency in data back-up systems

    60 Leaders in AI

     

AI Australia Presents: This Week in AI vol 4

There have been several noteworthy developments in the world of AI this week, and in today’s episode, we share our take on the good, the bad, and the WTF. We start off by discussing exciting developments at The Human Technology Institute and how their new project, The Future of AI Regulation in Australia, aims to help laws and regulations catch up with large language models like ChatGPT. Next, we reflect on the tragic news of the first death directly linked to a chatbot, how little the public truly understands about these technologies, and what can be done to minimize these types of risks in future. Our conversation then delves into the proposed moratorium to halt the development of more sophisticated versions of ChatGPT and key concerns that are being raised around this topic. We also spend some time unpacking the problematic source behind this petition, namely, The Future of Life Institute, before discussing what experts like Emily M. Bender and Gary Marcus have to say about the topic. To learn more about this week in AI, including our top takeaways, be sure to tune in today!

     

    Key Points From This Episode:

     

    • The Human Technology Institute’s new project: The Future of AI Regulation in Australia.

    • Reflections on the first tragic death directly linked to a chatbot.

    • Our thoughts on the petition for a moratorium to halt the development of more sophisticated versions of ChatGPT.

    • The problematic elements of the proposed moratorium, including its lack of boundaries and guardrails.

    • A reminder of the controversial origins of the Future of Life Institute and why the fact that they are funding this petition should act as a warning sign.

    • An overview of the discourse around this topic and our key takeaways.

• Why laws and regulations have failed to keep pace with large language models like ChatGPT.

    • How the Human Technology Institute’s new project aims to improve regulation, ensure more equitable distribution of wealth, and minimize the potential harm of these technologies.

    Links Mentioned in Today’s Episode:


    'Man ends his life after an AI chatbot 'encouraged' him to sacrifice himself to stop climate change'

    'The first known chatbot associated death'

    'Elon Musk Signs Open Letter Urging AI Labs to Pump the Brakes'

    'The Future of AI Regulation in Australia' 

    Ed Santow on LinkedIn

    University of Technology Sydney (UTS)

    Human Technology Institute (HTI)

    Emily M. Bender

    Emily M. Bender on LinkedIn

    Emily M. Bender on Medium

    'On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?'

    'Policy makers: Please don’t fall for the distractions of #AIhype'

    The Future of Life Institute

    Natalie Rouse on LinkedIn

    Dr Kobi Leins

    Dr Kobi Leins on LinkedIn

    Dr Kobi Leins on Twitter

    Eliiza

     
