
    Episodes (55)

Internal Developer Platforms: Helping Teams Limit Scope

In this New Stack Makers podcast, Ben Wilcock, a senior technical marketing architect for Tanzu, speaks with TNS editor-in-chief Heather Joslyn about the challenges organizations face when building internal developer platforms, particularly the issue of scope, at KubeCon + CloudNativeCon North America.

He emphasizes how difficult it is for platform engineering teams to select and integrate Kubernetes projects from among a plethora of options, and highlights the complexity of tracking software updates, new features and dependencies once those choices are made. He underscores the advantage of a standardized approach to software deployment, which prevents the errors caused by a diversity of deployment mechanisms.

    Tanzu aims to simplify the adoption of platform engineering and internal developer platforms, offering a turnkey approach with the Tanzu Application Platform. This platform is designed to be flexible, malleable, and functional out of the box. Additionally, Tanzu has introduced the Tanzu Developer Portal, providing a focal point for developers to share information and facilitating faster progress in platform engineering without the need to integrate numerous open source projects.

     

    Learn more from The New Stack about Tanzu and internal developer platforms:

VMware Unveils a Pile of New Data Services for Its Cloud

    VMware Expands Tanzu into a Full Platform Engineering Environment 

    VMware Targets the Platform Engineer

     

    Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

     

How the Kubernetes Gateway API Beats Network Ingress

In this New Stack Makers podcast, Mike Stefaniak, senior product manager at NGINX, and Kate Osborn, a software engineer at NGINX, discuss the challenges associated with network ingress in Kubernetes clusters and introduce the Kubernetes Gateway API as a solution. Stefaniak highlights the issues that arise when multiple teams work on the same ingress, leading to friction and incidents. NGINX has also introduced NGINX Gateway Fabric, implementing the Kubernetes Gateway API as an alternative to network ingress.

    The Kubernetes Gateway API, proposed four years ago and recently made generally available, offers advantages such as extensibility. It allows referencing policies with custom resource definitions for better validation, avoiding the need for annotations. Each resource has an associated role, enabling clean application of role-based access control policies for enhanced security.

While network ingress is prevalent and mature, the Kubernetes Gateway API is expected to find adoption in greenfield projects first. It has the potential to unite North-South and East-West traffic, offering a role-oriented API for comprehensive control over cluster traffic. The episode encourages exploring the Kubernetes Gateway API and engaging with the community to contribute to its development.
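The role orientation mentioned above is visible in the API's resource model. Below is a small Python sketch of how each Gateway API resource kind maps to the persona expected to manage it; the resource kinds and API group are real, while the mapping helper and manifest details are illustrative:

```python
# Sketch of the role-oriented resource split in the Kubernetes Gateway API
# (gateway.networking.k8s.io/v1). Each resource kind maps to the persona
# expected to manage it, which is what makes clean RBAC policies possible.

ROLE_FOR_KIND = {
    "GatewayClass": "infrastructure-provider",  # e.g. NGINX Gateway Fabric
    "Gateway": "cluster-operator",              # listeners, TLS, addresses
    "HTTPRoute": "application-developer",       # hostnames, paths, backends
}

# A minimal (illustrative) HTTPRoute, the resource a developer would own:
http_route = {
    "apiVersion": "gateway.networking.k8s.io/v1",
    "kind": "HTTPRoute",
    "metadata": {"name": "store-route"},
    "spec": {
        "parentRefs": [{"name": "shared-gateway"}],  # binds to the operator's Gateway
        "rules": [{"backendRefs": [{"name": "store-svc", "port": 8080}]}],
    },
}

def role_for(resource: dict) -> str:
    """Return the persona that should own this Gateway API resource."""
    return ROLE_FOR_KIND[resource["kind"]]

print(role_for(http_route))  # -> application-developer
```

Because each kind has a single owning role, an RBAC policy can grant developers write access to HTTPRoutes without ever exposing the operator's Gateway or the provider's GatewayClass.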

    Learn more from The New Stack about NGINX and the open source Kubernetes Gateway API:

    Kubernetes API Gateway 1.0 Goes Live, as Maintainers Plan for The Future 

    API Gateway, Ingress Controller or Service Mesh: When to Use What and Why 

    Ingress Controllers or the Kubernetes Gateway API? Which is Right for You? 

     

The New Age of Virtualization

KubeVirt, a relatively new capability within Kubernetes, signifies a shift in the virtualization landscape, allowing operations teams to run KVM virtual machines nested in containers behind the Kubernetes API. This integration means that the Kubernetes API now encompasses the concept of virtual machines, enabling VM-based workloads to operate seamlessly within a cluster behind the API. This development addresses the challenge of transitioning traditional virtualized environments into cloud-native settings, where certain applications may resist containerization or require substantial investments for adaptation.

    The emerging era of virtualization simplifies the execution of virtual machines without concerning the underlying infrastructure, presenting various opportunities and use cases. Noteworthy advantages include simplified migration of legacy applications without the need for containerization, thereby reducing associated costs.
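To make "virtual machines behind the Kubernetes API" concrete, here is a hedged sketch of a KubeVirt VirtualMachine custom resource modeled as a Python dict; the apiVersion and kind are KubeVirt's, but the field layout shown is abbreviated and illustrative, not a complete spec:

```python
# A VM "behind the Kubernetes API" is just another custom resource,
# managed like any other Kubernetes object. Field layout abbreviated
# for illustration; it is not a complete KubeVirt spec.

vm = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "legacy-app-vm"},
    "spec": {
        "running": True,
        "template": {  # pod-like template wrapping the KVM guest
            "spec": {
                "domain": {
                    "cpu": {"cores": 2},
                    "memory": {"guest": "4Gi"},
                }
            }
        },
    },
}

def is_vm_workload(obj: dict) -> bool:
    """Tooling check: is this object a KubeVirt virtual machine?"""
    return obj["apiVersion"].startswith("kubevirt.io") and obj["kind"] == "VirtualMachine"

print(is_vm_workload(vm))  # -> True
```

The point is that cluster tooling (schedulers, GitOps pipelines, RBAC) sees VMs and containers through the same API surface.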

KubeVirt 1.1, discussed at KubeCon in Chicago by Red Hat's Vladik Romanovsky and Nvidia's Ryan Hallisey, introduces features like memory hotplug and vCPU hotplug, emphasizing the stability of KubeVirt. The platform's stability now allows for the implementation of features that were previously constrained.

Learn more from The New Stack about KubeVirt and the Cloud Native Computing Foundation:

    The Future of VMs on Kubernetes: Building on KubeVirt

    A Platform for Kubernetes

    Scaling Open Source Community by Getting Closer to Users

Kubernetes Goes Mainstream? With Calico, Yes

    The Kubernetes landscape is evolving, shifting from the domain of visionaries and early adopters to a more mainstream audience. Tigera, represented by CEO Ratan Tipirneni at KubeCon North America in Chicago, recognizes the changing dynamics and the demand for simplified Kubernetes solutions. Tigera's open-source Calico security platform has been updated with a focus on mainstream users, presenting a cohesive and user-friendly solution. This update encompasses five key capabilities: vulnerability scoring, configuration hardening, runtime security, network security, and observability.

    The aim is to provide users with a comprehensive view of their cluster's security through a zero to 100 scoring system, tracked over time. Tigera's recommendation engine suggests actions to enhance overall security based on the risk profile, evaluating factors such as egress traffic controls and workload isolation within dynamic Kubernetes environments. Tigera emphasizes the importance of understanding the actual flow of data across the network, using empirical data and observed behavior to build accurate security measures rather than relying on projections. This approach addresses the evolving needs of customers who seek not just vulnerability scores but insights into runtime behavior for a more robust security profile.
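A toy sketch of such a zero-to-100 scoring scheme follows; the checks and weights are invented for illustration and are not Tigera's actual model:

```python
# Hypothetical 0-100 cluster security score built from observed posture
# checks. The checks and weights are invented for illustration.

CHECKS = {  # weight per check; weights sum to 100
    "egress_traffic_controlled": 30,
    "workloads_isolated": 30,
    "images_scanned": 20,
    "runtime_alerts_clear": 20,
}

def cluster_score(observed: dict) -> int:
    """Sum the weights of the posture checks that currently pass."""
    return sum(w for check, w in CHECKS.items() if observed.get(check))

posture = {
    "egress_traffic_controlled": True,
    "workloads_isolated": False,   # a recommendation engine would flag this
    "images_scanned": True,
    "runtime_alerts_clear": True,
}
print(cluster_score(posture))  # -> 70
```

Tracking this number over time, as the episode describes, turns individual findings into a single trend a team can act on.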

    Learn more from The New Stack about Tigera and Cloud Native Security:

    Cloud Native Network Security: Who’s Responsible?

    Turbocharging Host Workloads with Calico eBPF and XDP

    3 Observability Best Practices for Cloud Native App Security

Hello, GitOps -- Boeing's Open Source Push

    Boeing, with around 6,000 engineers, is emphasizing open source engagement by focusing on three main themes, according to Damani Corbin, who heads Boeing's Open Source office. He joined our host, Alex Williams, for a discussion at KubeCon+CloudNativeCon in Chicago.

    The first priority Corbin talks about is simplifying the consumption of open source software for developers. Second, Boeing aims to facilitate developer contributions to open source projects, fostering involvement in communities like the Cloud Native Computing Foundation and the Linux Foundation. The third theme involves identifying opportunities for "inner sourcing" to share internally developed solutions across different groups.

    Boeing is actively working to break down barriers and encourage code reuse across the organization, promoting participation in open source initiatives. Corbin highlights the importance of separating business-critical components from those that can be shared with the community, prioritizing security and extending efforts to enhance open source security practices. The organization is consolidating its open source strategy by collaborating with legal and information security teams.

    Corbin emphasizes the goal of making open source involvement accessible and attractive, with a phased approach to encourage meaningful contributions and ultimately enabling the compensation of engineers for open source work in the future.

    Learn more from The New Stack about Boeing and CNCF open source projects:

    How Boeing Uses Cloud Native

    How Open Source Has Turned the Tables on Enterprise Software

    Scaling Open Source Community by Getting Closer to Users

    Mercedes-Benz: 4 Reasons to Sponsor Open Source Projects

How AWS Supports Open Source Work in the Kubernetes Universe

At KubeCon + CloudNativeCon North America 2022, Amazon Web Services (AWS) revealed plans to mirror Kubernetes assets hosted on Google Cloud, addressing the Cloud Native Computing Foundation's (CNCF) egress costs. A year later, the project, led by AWS's Davanum Srinivas, redirects image requests to the nearest cloud provider, reducing egress costs for users.

    AWS's Todd Neal and Jonathan Innis discussed this on The New Stack Makers podcast recorded at KubeCon North America 2023. Neal explained the registry's functionality, allowing users to pull images directly from the respective cloud provider, avoiding egress costs.

    The discussion also highlighted AWS's recent open source contributions, including beta features in Kubectl, prerelease of Containerd 2.0, and Microsoft's support for Karpenter on Azure. Karpenter, an AWS-developed Kubernetes cluster autoscaler, simplifies node group configuration, dynamically selecting instance types and availability zones based on running pods.
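The idea behind Karpenter's dynamic selection can be sketched as a toy model (not Karpenter's real algorithm; the instance catalog below is hypothetical): choose the smallest instance type that fits the aggregate requests of the pending pods, rather than pre-configuring fixed node groups.

```python
# Toy sketch of the idea behind Karpenter: pick an instance type for
# pending pods based on their aggregate resource requests instead of
# a pre-configured node group. Catalog and logic are illustrative.

INSTANCE_TYPES = [  # hypothetical catalog: (name, vCPU, memory GiB)
    ("m5.large", 2, 8),
    ("m5.xlarge", 4, 16),
    ("m5.2xlarge", 8, 32),
]

def pick_instance(pending_pods):
    """Return the smallest instance type that fits the summed requests."""
    cpu = sum(p["cpu"] for p in pending_pods)
    mem = sum(p["memory"] for p in pending_pods)
    for name, vcpu, gib in INSTANCE_TYPES:  # catalog sorted smallest-first
        if vcpu >= cpu and gib >= mem:
            return name
    return None  # nothing fits; a real autoscaler would split the pods

pods = [{"cpu": 1, "memory": 4}, {"cpu": 2, "memory": 6}]
print(pick_instance(pods))  # -> m5.xlarge
```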

    The AWS team encouraged developers to contribute to Kubernetes ecosystem projects and join the sig-node CI subproject to enhance kubelet reliability. The conversation in this episode emphasized the benefits of open development for rapid feedback and community collaboration.

    Learn more from The New Stack about AWS and Open Source:

    Powertools for AWS Lambda Grows with Help of Volunteers

    Amazon Web Services Open Sources a KVM-Based Fuzzing Framework

    AWS: Why We Support Sustainable Open Source 

How to Know If You’re Building the Right Internal Tools

    In this episode of The New Stack Makers, Rob Skillington, co-founder and CTO of Chronosphere, discusses the challenges engineers face in building tools for their organizations. Skillington emphasizes that the "build or buy" decision oversimplifies the issue of tooling and suggests that understanding the abstractions of a project is crucial. Engineers should consider where to build and where to buy, creating solutions that address the entire problem. Skillington advises against short-term thinking, urging innovators to consider the long-term landscape.

    Drawing from his experience at Uber, Skillington highlights the importance of knowing the audience and customer base, even when they are colleagues. He shares a lesson learned when building a visualization platform for engineers at Uber, where understanding user adoption as a key performance indicator upfront could have improved the project's outcome.

    Skillington also addresses the "not invented here syndrome," noting its prevalence in organizations like Microsoft and its potential impact on tool adoption. He suggests that younger companies, like Uber, may be more inclined to explore external solutions rather than building everything in-house. The conversation provides insights into Skillington's experiences and the considerations involved in developing internal tools and platforms.

    Learn more from The New Stack about Software Engineering, Observability, and Chronosphere:

    Cloud Native Observability: Fighting Rising Costs, Incidents

    A Guide to Measuring Developer Productivity 

    4 Key Observability Best Practices

What Will Be Hot at KubeCon in Chicago?

    KubeCon 2023 is set to feature three hot topics, according to Taylor Dolezal from the Cloud Native Computing Foundation. Firstly, GenAI and Large Language Models (LLMs) are taking the spotlight, particularly regarding their security and integration with legacy infrastructure. Platform engineering is also on the rise, with over 25 sessions at KubeCon Chicago focusing on its definition and how it benefits internal product teams by fostering a culture of product proliferation. Lastly, WebAssembly is emerging as a significant topic, with a dedicated day during the conference week. It is maturing and finding its place, potentially complementing containers, especially in edge computing scenarios. Wasm allows for efficient data processing before data reaches the cloud, adding depth to architectural possibilities.

    Overall, these three trends are expected to dominate discussions and presentations at KubeCon NA 2023, offering insights into the future of cloud-native technology.

    See what came out of the last KubeCon event in Amsterdam earlier this year:

    AI Talk at KubeCon

    Don’t Force Containers and Disrupt Workflows

    A Boring Kubernetes Release

AI Talk at KubeCon

What did software engineers at KubeCon say about how AI is coming up in their work? That's a question we posed to Taylor Dolezal, head of ecosystem for the Cloud Native Computing Foundation, at KubeCon in Amsterdam.

    Dolezal said AI did come up in conversation.

    "I think that when it's come to this, typically with KubeCons, and other CNCF and LF events, there's always been one or two topics that have bubbled to the top," Dolezal said.

    At its core, AI surfaces a data issue for users that correlates to data sharing issues, said Dolezal in this latest episode of The New Stack Makers.

    Read more about AI and Kubernetes on The New Stack:

    3 Important AI/ML Tools You Can Deploy on Kubernetes

    Flyte: An Open Source Orchestrator for ML/AI Workflows

    Overcoming the Kubernetes Skills Gap with ChatGPT Assistance

Kubernetes Is No Longer Exciting (and That's a Good Thing)

Kubernetes is becoming more and more mainstream, even though it is still complex. That is one of the conclusions from KubeCon + CloudNativeCon, recently organized by the Cloud Native Computing Foundation in Amsterdam.

We were at the RAI in Amsterdam to immerse ourselves once again in the world of Kubernetes and cloud-native development. It was busier than ever this year: we heard that around 12,000 people attended, and that tickets had sold out well before the event, so interest ran even higher than that number suggests. Alongside the official program in the RAI, there were also all kinds of so-called off-sites, some under the CNCF flag and some not, partly taking place before the event formally began. Clearly, the topics covered at KubeCon + CloudNativeCon are alive and well.

It Is Less and Less About Kubernetes

One of the observations from KubeCon + CloudNativeCon is that Kubernetes as a whole has matured another step. Last year we recorded an episode of Techzine Talks about this, and the trend has continued over the past year. That means the sweet spot for the number of Kubernetes clusters per organization is going up, while the extreme outliers in cluster counts are becoming rarer. Research by VMware showed this, and Red Hat's findings on application modernization seemed to confirm it: rehosting (lift-and-shift) is losing popularity in favor of replatforming and refactoring. You would not see this shift without growing knowledge and skill around the possibilities of Kubernetes and cloud native in general.

All in all, Kubernetes is becoming less and less exciting, CNCF CTO Chris Aniszczyk told us. He sees the same movement we saw with Linux: enormous conferences used to be organized around it, and now that happens far less, even though Linux has only become more important in the meantime. The same will happen with Kubernetes.

During KubeCon, alongside Kubernetes itself (which, again, gets less and less attention), we spotted several trends that we discuss in this episode of Techzine Talks: OTel (OpenTelemetry), Wasm (WebAssembly), and above all internal developer platforms (IDPs) and the role Backstage plays in them. And of course we also talk about the skills gap.

KubeCon + CloudNativeCon EU 2023: Hello Amsterdam

    Hoi Europe and beyond!

Once again it is time for cloud native enthusiasts and professionals to converge and discuss cloud native computing in all its efficiency and complexity. The Cloud Native Computing Foundation's KubeCon+CloudNativeCon 2023 is being held later this month in Amsterdam, April 18-21, at the RAI Convention Centre.

In this latest edition of The New Stack podcast, we spoke with two of the event's co-chairs who helped define this year's themes for the show, which is expected to draw over 9,000 attendees: Aparna Subramanian, Shopify's director of production engineering for infrastructure, and Frederick Kautz, enterprise architect for cloud native infrastructure and security.

Hazelcast and the Benefits of Real Time Data

    In this latest podcast from The New Stack, we interview Manish Devgan, chief product officer for Hazelcast, which offers a real time stream processing engine. This interview was recorded at KubeCon+CloudNativeCon, held last October in Detroit.

     

"'Real time' means different things to different people, but it's really a business term," Devgan explained. In the business world, time is money, and the more quickly you can make a decision, using the right data, the more quickly you can take action.

     

Although we have many "batch-processing" systems, the data itself rarely comes in batches, Devgan said. "A lot of times I hear from customers that are using a batch system, because those are the things which are available at that time. But data is created in real time: sensors, your machines, espionage data, or even customer data — right when customers are transacting with you."

     

    What is a Real Time Data Processing Engine?

     

A real time data processing engine can analyze data as it is coming in from the source. This is different from traditional approaches that store the data first, then analyze it later. Bank loans are an example of this traditional approach.

     

    With a real time data processing engine in place, a bank can offer a loan to a customer using an automated teller machine (ATM) in real time, Devgan suggested.  "As the data comes in, you can actually take action based on context of the data," he argued.

     

    Such a loan app may combine real-time data from the customer alongside historical data stored in a traditional database. Hazelcast can combine historical data with real time data to make workloads like this possible.
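The pattern Devgan describes, acting on each event as it arrives and enriching it with historical context, can be sketched in plain Python (an illustration of the concept, not Hazelcast's API):

```python
# Illustration of the pattern described in the interview: act on each
# event as it arrives, enriching it with historical data, instead of
# storing everything first and analyzing it later in a batch.

HISTORY = {  # stands in for a store of historical customer data
    "cust-42": {"avg_balance": 9_500, "missed_payments": 0},
    "cust-77": {"avg_balance": 300, "missed_payments": 4},
}

def on_atm_event(event):
    """Decide, in the moment, whether to offer this customer a loan."""
    profile = HISTORY.get(event["customer_id"], {})
    good_history = (
        profile.get("avg_balance", 0) > 5_000
        and profile.get("missed_payments", 1) == 0
    )
    return "offer_loan" if good_history else "no_offer"

# Events are handled one at a time, as they stream in:
print(on_atm_event({"customer_id": "cust-42", "amount": 60}))  # -> offer_loan
print(on_atm_event({"customer_id": "cust-77", "amount": 20}))  # -> no_offer
```

A real stream processing engine adds the hard parts this sketch omits: partitioning, fault tolerance, and low-latency joins between the stream and the historical store.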

     

In this interview, we also debated the merits of Kafka, the benefits of using a managed service rather than running an application in house, Hazelcast's users, and features in the latest release of the Hazelcast platform.

Open Source Underpins A Home Furnishings Provider’s Global Ambitions

Wayfair describes itself as “the destination for all things home: helping everyone, anywhere create their feeling of home.” It provides an online platform to acquire home furniture, outdoor decor and other furnishings. It also supports its suppliers so they can use the platform to sell their home goods, explained Natali Vlatko, global lead, open source program office (OSPO) and senior software engineering manager for Wayfair, as the featured guest in Detroit during KubeCon + CloudNativeCon North America 2022.

     

    “It takes a lot of technical, technical work behind the scenes to kind of get that going,” Vlatko said. This is especially true as Wayfair scales its operations worldwide. The infrastructure must be highly distributed, relying on containerization, microservices, Kubernetes, and especially, open source to get the job done.

     

    “We have technologists throughout the world, in North America and throughout Europe as well,”  Vlatko said. “And we want to make sure that we are utilizing cloud native and open source, not just as technologies that fuel our business, but also as the ways that are great for us to work in now.”

     

Open source has served as a “great avenue” for creating and offering technical services, and to accomplish that, Vlatko amassed the requisite talent, she said. She assembled a small team of engineers to focus on platform work, advocacy, community management and, internally, on license compliance.

     

About five years ago, when Vlatko joined Wayfair, the company had yet to go “full tilt into going all cloud native,” Vlatko said. Wayfair had a hybrid mix of on-premises and cloud infrastructure. After decoupling a monolith into a microservices architecture, “that journey really began where we understood the really great benefits of microservices and got to a point where we thought, ‘okay, this hybrid model for us actually would benefit our microservices being fully in the cloud,’” Vlatko said. In late 2020, Wayfair made the decision to “get out of the data centers” and shift operations to the cloud, which was completed in October, Vlatko said.

     

The company culture is such that engineers have room to experiment without major fear of failure, by doing a lot of development work in a sandbox environment. “We've been able to create environments that are close to our production environments so that experimentation in sandboxes can occur. Folks can learn as they go without actually fearing failure or fearing a mistake,” Vlatko said. “So, I think experimentation is a really important aspect of our own learning and growth for cloud native. Also, coming to great events like KubeCon + CloudNativeCon and other events [has been helpful]. We're hearing from other companies who've done the same journey and process and are learning from the use cases.”

How Boeing Uses Cloud Native

    In this latest podcast from The New Stack, we spoke with Ricardo Torres, who is the chief engineer of open source and cloud native for aerospace giant Boeing. Torres also joined the Cloud Native Computing Foundation in May to serve as a board member. In this interview, recorded at KubeCon+CloudNativeCon last month, Torres speaks about Boeing's use of open source software, as well as its adoption of cloud native technologies.

     

    While we may think of Boeing as an airplane manufacturer, it would be more accurate to think of the company as a large-scale system integrator, one that uses a lot of software. So, like other large-scale companies, Boeing sees a distinct advantage in maintaining good relations with the open source community.

     

    "Being able to leverage the best technologists out there in the rest of the world is of great value to us strategically," Torres said. This strategy allows Boeing to "differentiate on what we do as our core business rather than having to reinvent the wheel all the time on all of the technology."

     

    Like many other large companies, Boeing has created an open source office to better work with the open source community. Although Boeing is primarily a consumer of open source software, it still wants to work with the community. "We want to make sure that we have a strategy around how we contribute back to the open source community, and then leverage those learnings for inner sourcing," he said.

     

Boeing also manages how it uses open source internally, keeping tight controls on the supply chain of open source software it uses. "As part of the software engineering organization, we partner with our internal IT organization to look at our internet traffic and assure nobody's going out and downloading directly from an untrusted repository or registry. Instead, we host approved sources internally."

     

    It's not surprising that Boeing, which deals with a lot of government agencies, embraces the practice of using software bills of material (SBOMs), which provide a full listing of what components are being used in a software system. In fact, the company has been working to extend the comprehensiveness of SBOMs, according to Torres.

     

"I think one of the interesting things now is the automation," he said of SBOMs. "And so we're always looking to beef up the heuristics, because a lot of the tools are relatively naïve, in that they trust that the dependencies that are specified are actually representative of everything that's delivered. And that's not good enough for a company like Boeing. We have to be absolutely certain that what's there is exactly what we expected to be there."
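The gap Torres describes, where tools trust the declared dependency list rather than verifying what was actually delivered, can be sketched as a simple set comparison (illustrative code, not Boeing tooling; component names are hypothetical):

```python
# Naive SBOM tools trust the declared dependency list; a stricter check
# compares the SBOM against what is actually present in the artifact.

def sbom_gap(declared: set, delivered: set) -> dict:
    """Report components delivered but not declared, and vice versa."""
    return {
        "undeclared": sorted(delivered - declared),  # in artifact, not in SBOM
        "missing": sorted(declared - delivered),     # in SBOM, not in artifact
    }

declared = {"openssl@3.0.7", "zlib@1.2.13"}
delivered = {"openssl@3.0.7", "zlib@1.2.13", "libxml2@2.10.3"}

print(sbom_gap(declared, delivered))
# -> {'undeclared': ['libxml2@2.10.3'], 'missing': []}
```

Any non-empty "undeclared" list means the SBOM is not representative of the shipped artifact, which is exactly the failure mode the quote calls out.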

    Cloud Native Computing

While Boeing builds many systems that reside in private data centers, the company is increasingly relying on the cloud as well. Earlier this year, Boeing signed agreements with the three largest cloud service providers (CSPs): Amazon Web Services, Microsoft Azure and Google Cloud Platform.

     

    "A lot of our cloud presence is about our development environments. And so, you know, we have cloud-based software factories that are using a number of CNCF and CNCF-adjacent technologies to enable our developers to move fast," Torres said.

Case Study: How Dell Technologies Is Building a DevRel Team

    DETROIT — Developer relations, or DevRel to its friends, is not only a coveted career path but also essential to helping developers learn and adopt new technologies.

     

That guidance is a matter of survival for many organizations. The cloud native era demands new skills and new ways of thinking about developers’ and engineers’ day-to-day jobs. At Dell Technologies, it meant responding to the challenges faced by its existing customer base, which is “very Ops centric — server admins, system admins,” according to Brad Maltz of Dell.

     

With the rise of the DevOps movement, “what we realized is our end users have been trying to figure out how to become infrastructure developers,” said Maltz, the company’s senior director of DevOps portfolio and DevRel. “They've been trying to figure out how to use infrastructure as code, Kubernetes, cloud, all those things.”

     

    “And what that means is we need to be able to speak to them where they want to go, when they want to become those developers. That’s led us to build out a developer relations program ... and in doing that, we need to grow out the community, and really help our end users get to where they want to.”

     

    In this episode of The New Stack’s Makers podcast, Maltz spoke to Heather Joslyn, TNS features editor, about how Dell has, since August, been busy creating a DevRel team to aid its enterprise customers seeking to adopt DevOps as a way of doing business.

     

    This On the Road edition of Makers, recorded at KubeCon + CloudNativeCon North America in the Motor City, was sponsored by Dell Technologies.

     

    Recruiting Influencers

     

    Maltz, an eight-year veteran of Dell, has moved quickly in assembling his team, with three hires made by late October and a fourth planned before year’s end. That’s lightning fast, especially for a large, established company like Dell, which was founded in 1984.

     

    “There's two ways of building a DevOps team,” he said. “One way is to actually kind of go and try to homegrow people on the inside and get them more presence in the community. That's the slower road.

     

    “But we decided we have to go and find industry influencers that believe in our cause, that believe in the problem space that we live in. And that's really how we started this: we went out to find some very, very strong top talent in the industry and bring them on board.”

     

    In addition to spreading the DevOps solutions gospel at conferences like KubeCon, Maltz’s vision for the team is currently focused on social media and building out a website, developer.dell.com, which will serve as the landing page for the company’s DevRel knowledge, including links to community, training, how-to videos and an API marketplace.

     

In building the team, the company made an unorthodox choice. “We decided to put DevRel into product management on the product side, not marketing,” Maltz said. “The reason we did that was we want the DevRel folks to really focus on community contributions, education, all that stuff.

     

    “But while they're doing that, their job is to bring the data back from those discussions they're having in the field back to product management, to enable our tooling to be able to satisfy some of those problems that they're bringing back so we can start going full circle.”

     

    Facing the Limits of ‘Shift Left’

     

    The roles that Dell’s DevRel team is focusing on in the DevOps culture are site reliability engineers (SREs) and platform engineers. These not only align with its traditional audience of Ops engineers, but reflect a reality Dell is seeing in the wider tech world.

     

“The reality is, application developers don't want to shift left. They don't want to operate; they want somebody else to take it, and they want to keep developing,” Maltz said. “Where DevOps has transitioned for us is: how do we help those people that are kind of that operator turning into infrastructure developer fit into that DevOps culture?”

     

    The rise of platform engineering, he suggested, is a reaction to the endless choices of tools available to developers these days.

     

    “The notion is developers in the wild are able to use any tool on any cloud with any language, and they can do whatever they want. That's hard to support,” he said.

     

“That's where DevOps got introduced: to basically say, ‘Hey, we're gonna put you into a little bit of a box, just enough of a box that we can start to gain control and get ahead of the game.’ The platform engineering team, in this case, they're the ones in charge of that box.”

     

    But all of that, Maltz said, doesn’t mean that “shift left” — giving devs greater responsibility for their applications — is dead. It simply means most organizations aren’t ready for it yet: “That will take a few more years of maturity within these DevOps operating models, and other things that are coming down the road.”

     

    Check out the full episode for more from Maltz, including new solutions from Dell aimed at platform engineers and SREs and collaborations with Red Hat OpenShift.

    OpenTelemetry Properly Explained and Demoed


    The OpenTelemetry project offers vendor-neutral integration points that help organizations obtain the raw materials — the "telemetry" — that fuel modern observability tools, with minimal effort at integration time. But what does OpenTelemetry mean for those who use their favorite observability tools but don’t exactly understand how it can help them? How might OpenTelemetry be relevant to folks who are new to Kubernetes (the majority of KubeCon attendees in recent years) and those who are just getting started with observability? 

     

    Austin Parker, head of developer relations at Lightstep, and Morgan McLean, director of product management at Splunk, discussed during this podcast at KubeCon + CloudNativeCon 2022 how the OpenTelemetry project has created demo services to help cloud native community members better understand cloud native development practices and test out OpenTelemetry, as well as Kubernetes, observability software and more. 

     

    At this juncture in DevOps history, there has been considerable hype around observability for developers and operations teams. More recently, much attention has been given to combining the different observability solutions in use through a single interface, and to that end, OpenTelemetry has emerged as a key standard. 

     

    DevOps teams today need OpenTelemetry since they typically work with many different data sources for observability, Parker said. “If you want observability, you need to transform and send that data out to any number of open source or commercial solutions, and you need a lingua franca to be consistent. Every time I have a host, or an IP address, or any kind of metadata, consistency is key, and that's what OpenTelemetry provides.”
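    That consistency layer is easiest to see in an OpenTelemetry Collector configuration — a sketch, not taken from the episode: one vendor-neutral OTLP receiver ingests traces, metrics and logs, and the exporters section fans the same data out to whichever backends a team chooses (the `debug` exporter here is just a stand-in for any open source or commercial destination).

```yaml
receivers:
  otlp:            # one vendor-neutral ingestion point for traces, metrics and logs
    protocols:
      grpc:
      http:

processors:
  batch:           # batch telemetry before export

exporters:
  debug:           # stand-in: swap in any open source or commercial backend

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]
```

    Because every instrumented service speaks the same protocol into the same pipeline, switching or adding backends is a change to the exporters list, not to the application code.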

     

    Additionally, OpenTelemetry serves to instrument your system for observability, whether you are a developer or an operator, McLean said. “OpenTelemetry does that through the power of the community working together to define those standards and to provide the components needed to extract that data among hundreds of thousands of different combinations of software and hardware and infrastructure that people are using,” he said.

     

    Observability and OpenTelemetry, while conceptually straightforward, do involve a learning curve. To that end, the OpenTelemetry project has released a demo to help. It is intended both to explain cloud native development practices and to let users test out OpenTelemetry, as well as Kubernetes, observability software and more, the project’s creators say.

     

    The OpenTelemetry Demo v1.0 general release is available on GitHub and on the OpenTelemetry site. The demo teaches how to add instrumentation to an application to gather metrics, logs and traces for observability. There is extensive instruction on open source projects like Prometheus for Kubernetes monitoring and Jaeger for distributed tracing, and it shows how to use tools such as Grafana to create dashboards. The demo also extends to scenarios in which failures are created and OpenTelemetry data is used for troubleshooting and remediation. It was designed for beginner- or intermediate-level users, and can be set up to run on Docker or Kubernetes in about five minutes. 

     

    “The demo is a great way for people to get started,” Parker said. “We've also seen a lot of great uptake from our commercial partners as well who have said ‘we'll use this to demo our platform.’”

    CN&C Live from KubeCon 2022 in Detroit


    Cloud Native & Coffee presents biweekly conversations with cloud native infrastructure and application experts from across the community and around the world.

    Meet six open source innovators whose unique projects are solving problems for implementers and end users – making Kubernetes more cost-effective, safer, more secure, easier to operate, and improving the experience of application developers on the platform.

    A special thanks to our lovely & fascinating interview subjects!

    The interviewees featured this week are as follows:

    1. Madhuri Yechuri, Founder & CEO, Elotl
    2. Pieter van Noordennen, Head of Growth, Slim.AI
    3. Alin Dobra, Co-Founder & CEO, Bunnyshell
    4. Omri Gazitt, Co-Founder & CEO, Aserto
    5. Shahar Binyamin, Co-Founder & CEO, Inigo.io
    6. Michael Schmid, Co-Founder & Head of Technology, amazee.io

    As mentioned by Eric, this (RCN #49) will be the final episode of Radio Cloud Native in its current format.

    Please keep the podcast in your feed and keep an eye out for fresh content. Sometime early in this new year, the team at Mirantis is planning to revamp Radio Cloud Native (possibly under a new name, but accessible from this same RSS feed).

    Thank you to all of our listeners who kept Radio Cloud Native going over the past two or so years. A special thanks, as well as good luck, to long-time host Eric Gregory as he pursues future endeavors outside of Mirantis. Stay tuned for more podcasts from Mirantis in 2024!

    The Latest Milestones on WebAssembly's Road to Maturity


    DETROIT — Even in the midst of hand-wringing at KubeCon + CloudNativeCon North America about how the global economy will make it tough for startups to gain support in the near future, the news about a couple of young WebAssembly-centric companies was bright.

     

    Cosmonic announced that it had raised $8.5 million in a seed round led by Vertex Ventures. And Fermyon Technologies unveiled both funding and product news: a $20 million Series A led by Insight Partners (which also owns The New Stack) and the launch of Fermyon Cloud, a hosted platform for running WebAssembly (Wasm) microservices. Both Cosmonic and Fermyon were founded in 2021.

     

    “A lot of people think that Wasm is this maybe up and coming thing, or it's just totally new thing that's out there in the future,” noted Bailey Hayes, a director at Cosmonic, in this episode of The New Stack Makers podcast.

     

    But the future is already here, she said: “It's one of technology's best kept secrets, because you're using it today, all over. And many of the applications that we use day-to-day — Zoom, Google Meet, Prime Video, I mean, it really is everywhere. The thing that's going to change for developers is that this will be their compilation target in their build file.”

     

    In this On the Road episode of Makers, recorded at KubeCon here in the Motor City, Hayes and Kate Goldenring, a software engineer at Fermyon, spoke to Heather Joslyn, TNS’ features editor, about the state of WebAssembly. This episode was sponsored by the Cloud Native Computing Foundation (CNCF).

     

    Wasm and Docker, Java, Python

     

    WebAssembly, the roughly five-year-old binary instruction format for a stack-based virtual machine, is designed to execute binary code on the web. It lets developers bring the performance of languages like C, C++ and Rust to web development.

     

    At Wasm Day, a co-located event that preceded KubeCon, support for a number of other languages — including Java, .NET, Python and PHP — was announced. At the same event, Docker also revealed that it has added Wasm as a runtime that developers can target; that feature is now in beta.

     

    Such steps move WebAssembly closer to fulfilling its promise to devs that they can “build once, run anywhere.”

     

    “With Wasm, developers shouldn't need to know necessarily that it's their compilation target,” said Hayes. But, she added, “what you do know is that you're now able to move that Wasm module anywhere in any cloud. The same one that you built on your desktop that might be on Windows can go and run on an ARM Linux server.”

     

    Goldenring pointed to the findings of the CNCF’s “mini survey” of WebAssembly users, released at Wasm Day, as evidence that the technology’s use cases are proliferating quickly.

     

    “Even though WebAssembly was made for the web, the number one response — it was around a little over 60% — said serverless,” she noted. “And then it said the edge, and then it said web development, and then it said IoT, and the use cases just keep going. And that's because it is this incredibly powerful, portable target that you can put in all these different use cases. It's secure, it has instant startup time.”

     

    Worlds and Warg Craft

     

    The podcast guests talked about recent efforts to make it easier to use Wasm, share code and reuse it, including the development of the component model, which proponents hope will simplify how WebAssembly works outside the browser. Goldenring and Hayes discussed efforts now under construction, including “worlds” files and Warg, a package registry for WebAssembly. (Hayes co-presented at Wasm Day on the work being done on WebAssembly package management, including Warg.)

     

    A world file, Hayes said, is a way of defining your environment. “One way to think of it is like .profile, but for Wasm, for a component. And so it tells me what types of capabilities I need for my Wasm module to run successfully, so the runtime can read that and give me the right stuff.”

     

    And as for Warg, Hayes said: “It's really a protocol and a set of APIs, so that we can slot it into existing ecosystems. A lot of people think of it as us trying to pave over existing technologies. And that's really not the case. The purpose of Warg is to be able to slot right in, so that you continue working in your current developer environment and experience and using the packages that you're used to. But get all of the advantages of the component model, which is this new specification we've been working on” at the W3C's WebAssembly Working Group.

     

    Goldenring added another finding from the CNCF survey: “Around 30% of people wanted better code reuse. That's a sign of a more mature ecosystem. So having something like Warg is going to help everyone who's involved in the server side of the WebAssembly space.”

     

    Listen to the full conversation to learn more about WebAssembly and how these two companies are tackling its challenges for developers.

    How Do We Protect the Software Supply Chain?


    DETROIT — Modern software projects’ emphasis on agility and building community has caused a lot of security best practices, developed in the early days of the Linux kernel, to fall by the wayside, according to Aeva Black, an open source veteran of 25 years.

     

    “And now we're playing catch up,” said Black, an open source hacker in Microsoft Azure’s Office of the CTO. “A lot of less than ideal practices have taken root in the past five years. We're trying to help educate everybody now.”

     

    Chris Short, senior developer advocate with Amazon Web Services (AWS), challenged the notion of “shifting left” and giving developers greater responsibility for security. “If security is everybody's job, it's nobody's job,” said Short, founder of the DevOps-ish newsletter.

     

    “We've gone through this evolution: just develop secure code, and you'll be fine,” he said. “There's no such thing as secure code. There are errors in the underlying languages sometimes …. There's no such thing as secure software. So you have to mitigate and then be ready to defend against coming vulnerabilities.”

     

    Black and Short talked about the state of the software supply chain’s security in an On the Road episode of The New Stack Makers podcast.

     

    Their conversation with Heather Joslyn, features editor of TNS, was recorded at KubeCon + CloudNativeCon North America here in the Motor City.

     

    This podcast episode was sponsored by AWS.

    ‘Trust, but Verify’

    For our podcast guests, “trust, but verify” is a slogan more organizations need to live by.

     

    A lot of the security problems that plague the software supply chain, Black said, come from companies — especially smaller organizations — “just pulling software directly from upstream. They trust a build someone's published; they don't verify, they don't check the hash, they don't check a signature. They just download a Docker image or binary from somewhere and run it in production.”

     

    That practice, Black said, “exposes them to anything that's changed upstream. If upstream has a bug or a network error in that repository, then they can't update as well.” Organizations, they said, should maintain an internal staging environment where they can verify code retrieved from upstream before pushing it to production — or rebuild it, in case a vulnerability is found, and push it back upstream.
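    The “verify” step Black describes can be sketched with nothing but the Python standard library; the artifact path and the published digest below are hypothetical stand-ins for a real release and the checksum its publisher distributes.

```python
import hashlib

def sha256_of(path):
    """Stream a downloaded artifact and compute its SHA-256 digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path, published_digest):
    """Refuse the artifact unless its digest matches the published one."""
    actual = sha256_of(path)
    if actual != published_digest:
        raise ValueError(f"digest mismatch for {path}: got {actual}")
    return True
```

    Checking a signature over the digest (with Sigstore or GPG, say) goes a step further, but even this plain hash check catches a tampered or corrupted download before it reaches production.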

     

    That build environment should also be firewalled, Short added: “Create those safeguards of, ‘Oh, you want to pull a package from not an approved source or not a trusted source? Sorry, not gonna happen.’”

     

    Being able to rebuild code that has vulnerabilities to make it more secure — or even being able to identify what’s wrong, and quickly — are skills that not enough developers have, the podcast guests noted.

     

    More automation is part of the solution, Short said. But, he added, by itself it's not enough. “Continuous learning is what we do here as a job," he said. "If you're kind of like, this is my skill set, this is my toolbox and I'm not willing to grow past that, you’re setting yourself up for failure, right? So you have to be able to say, almost at a moment's notice, ‘I need to change something across my entire environment. How do I do that?’”

    GitBOM and the ‘Signal-to-Noise Ratio’

    As both Black and Short said during our conversation, there’s no such thing as perfectly secure code. And even such highly touted tools as software bills of materials, or SBOMs, fall short of giving teams all the information they need to determine code’s safety.

     

    “Many projects have dependencies 10, 20, 30 layers deep,” Black said. “And so if your SBOM only goes one or two layers, you just don't have enough information to know if there's a vulnerability five or 10 layers down.”
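    That depth problem is easy to see with a toy dependency graph (the package names here are invented): a breadth-first walk cut off at two layers, like a shallow SBOM, never reaches the deeper transitive dependency where a vulnerability may hide.

```python
from collections import deque

# Toy graph: each package maps to its direct dependencies. A real SBOM
# tool would derive this from lockfiles or package metadata.
DEPS = {
    "app": ["web-framework", "logger"],
    "web-framework": ["http-parser"],
    "logger": [],
    "http-parser": ["string-utils"],
    "string-utils": [],
}

def dependencies(root, max_depth=None):
    """Collect dependencies of `root` by BFS, optionally cut off at max_depth layers."""
    seen = set()
    queue = deque([(root, 0)])
    while queue:
        pkg, depth = queue.popleft()
        if max_depth is not None and depth >= max_depth:
            continue  # a shallow SBOM stops expanding here
        for dep in DEPS.get(pkg, []):
            if dep not in seen:
                seen.add(dep)
                queue.append((dep, depth + 1))
    return seen
```

    With this graph, a two-layer scan reports "web-framework", "logger" and "http-parser" but never sees "string-utils", so a vulnerability there would go unreported.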

     

    Short brought up another issue with SBOMs: “There's nothing you can act on. The biggest thing for Ops teams or security teams is actionable information.”

     

    While Short applauded recent efforts to improve user education, he said he’s pessimistic about the state of cybersecurity: “There’s not a lot right now that's getting people actionable data. It's a lot of noise still, and we need to refine these systems well enough to know that, like, just because I have Bash doesn't necessarily mean I have every vulnerability in Bash.”

     

    One project aimed at addressing the situation is GitBOM, a new open source initiative. “Fundamentally, I think it’s the best bet we have to provide really high fidelity signal to defense teams,” said Black, who has worked on the project and produced a white paper on it this past January.

     

    GitBOM — the name will likely be changed, Black said — takes the underlying technology that Git relies on, using a hash table to track changes in a project's code over time, and reapplies it to track the supply chain of software. The technology is used to build a hash table connecting all of the dependencies in a project, creating what GitBOM’s creators call an artifact dependency graph.
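    A rough sketch of that idea (illustrative only, not the actual GitBOM format): hash each artifact's bytes the way Git hashes a blob, and fold the identifiers of its direct dependencies into the artifact's own identifier, so a change anywhere in the graph ripples up to the top-level ID.

```python
import hashlib

def blob_id(data):
    """Content-address raw bytes the way Git hashes a blob: SHA-1 over a header plus the body."""
    header = b"blob %d\x00" % len(data)
    return hashlib.sha1(header + data).hexdigest()

def artifact_id(data, dep_ids=()):
    """Name an artifact by its own bytes plus the sorted IDs of its direct
    dependencies, giving every node in the artifact dependency graph a
    stable, verifiable identifier."""
    manifest = data + b"\n" + "\n".join(sorted(dep_ids)).encode()
    return blob_id(manifest)

# A patched dependency changes every identifier above it in the graph.
lib_v1 = artifact_id(b"libfoo 1.0 source")
lib_v2 = artifact_id(b"libfoo 1.0.1 source")
app_v1 = artifact_id(b"app source", [lib_v1])
app_v2 = artifact_id(b"app source", [lib_v2])
```

    Because `app_v1` and `app_v2` differ, a defender who knows a vulnerable dependency's identifier can tell whether a given build could possibly contain it — the high-fidelity signal Black describes.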

     

    “We have a team working on a couple of proofs of concept right now,” Black said. “And the main effect I'm hoping to achieve from this is a small change in every language and compiler … then we can get traceability across the whole supply chain.”

     

    In the meantime, Short said, there’s plenty of room for broader adoption of the best practices that currently exist. “Security vendors, I feel, need to do a better job of moving teams in the right direction as far as action,” he said.

     

    At DevOps Chicago this fall, Short said, he ran an open space session in which he asked participants for their pain points related to working with containers.

     

    “And the whole room admitted to not using least privilege, not using policy engines that are available in the Kubernetes space,” he said. “So there's a lot of complexity that we’ve got to help people understand the need for it, and how to implement it.”

     

    Listen to the whole podcast to learn more about the state of software supply chain security.