OpenStack, the massive open source project that provides large businesses with the software tools to run their data center infrastructure, is now almost eight years old. While it had its ups and downs, hundreds of enterprises now use it to run their private clouds and there are even over two dozen public clouds that use the project’s tools. Users now include the likes of AT&T, Walmart, eBay, China Railway, GE Healthcare, SAP, Tencent and the Insurance Australia Group, to name just a few.
“One of the things that’s been happening is that we’re seven years in and the need for turning every type of infrastructure into programmable infrastructure has been proven out. It’s no longer a debate,” OpenStack COO Mark Collier told me ahead of the project’s semi-annual developer conference this week. OpenStack’s own surveys show that the project’s early adopters, who previously only tested it for their clouds, continue to move their production workloads to the platform, too. “We passed the hype phase,” Collier noted.
Indeed, in a recent survey of IT professionals, Linux and OpenStack service provider SUSE found that 23 percent of organizations now use OpenStack in production. That’s up from 15 percent in 2015.
So with OpenStack now being the de facto standard for running private clouds and most of the core projects being stable, the community has recently tackled new use cases that range from container support to edge computing and running machine learning workloads on OpenStack clouds. But to do all of this, users have to take the parts of OpenStack and mix and match them with other — typically also open source — projects. And that’s still very difficult, because these companies often have to figure out their own solutions for integrating all of these different tools. “What we find is that when we talk to users, they have figured it out,” said Collier. “But in the process — and it’s too hard — they had to write their own software to make it all work.”
So going forward, the OpenStack community and the Foundation behind it plan to focus quite a bit on exactly this problem. “We will redeploy our resources — and we spend $20 million a year — to how we can improve the integration and operational tools that make each open source project work better when you put them together,” said Collier. “That’s a pretty big evolution for us,” he added, but noted that this is also a natural step for the project. The community, after all, has already put some effort into testing core integrations with tools like Docker and Kubernetes for container support. “The last mile of open source success is the gaps in between projects,” said Collier.
“We, as a foundation, have to support all of the things that our community is doing,” added OpenStack Foundation Executive Director Jonathan Bryce. “That’s a broad open infrastructure mission and not just a subset of cloud virtualization technologies.”
In practice, this means new events that will focus on specific use cases, closer cooperation with other open source communities and more work on getting these different communities to work with each other. On the technical side, that also means going beyond proofs of concept and making sure that there are testing frameworks in place to ensure that different tools can work with each other.
Ideally, all of this will improve OpenStack, but I couldn’t help but think that Collier and Bryce were also thinking about the wider open source ecosystem and how projects like this can help others as well. “All of these open source projects — they operate in silos because they operate around distinct problems they are trying to solve,” said Collier. No doubt, there is a lot of truth to that. If the OpenStack community can punch some holes in these silos and get more projects to talk to each other, then that should be a win-win for everybody involved.
Both Collier and Bryce stressed that this doesn’t mean the project is taking its focus away from its core mission. There is still plenty of work happening in the core OpenStack projects. One example is the Ironic service for running and managing bare-metal workloads, something that’s becoming increasingly important to businesses that want to run machine learning workloads on their OpenStack clouds. Another is the Nova compute service that sits at the core of the project, which can now scale horizontally more easily thanks to a complete overhaul of its scheduler.
Featured Image: christopher_brown/Flickr under a CC BY 2.0 license