We discuss what it’s been like tracking cloud computing trends for the past ten years. We look at public cloud. Are we leveling out on features and functions to stuff into public cloud? At some point, pricing wars with public cloud providers will end. Providers will need to differentiate fast. It will be interesting to see if AWS continues to be a leader with many followers or if each provider finds its niche. We also look at top predictions for cloud computing in 2018.
We discuss the links and channels between microservices and DevOps. What happens to architecture in a Scrum world? Many think you should just start with Scrum and fix problems as they come. That is primarily a dev perspective; formal architects need a say early in the process to set projects up for success. One recommendation is to add architecture design as part of each sprint. We also look at SOA versus microservices and how we are now able to manage the volume of services that SOA implies.
We discuss digital edge computing and why it is important for enterprises to embrace; it even saves them money in the long run. Every new product today needs sensors to gather information and send it back to the manufacturer for feedback and safety information. This applies to everything from autonomous cars and fitness gear to elevators and the military. The tricky part of all these sensors is that you cannot afford to have the data travel back to headquarters for machine learning in your own data warehouse. That takes too long. The insights need to happen at the device, and that means enterprises need a digital edge strategy.
Will all this information coming in through sensors reach a saturation point? Will IoT clients end up with too much information? One problem is that people gather every data point without necessarily understanding what the information means. It is important for organizations to decide what data to keep and why. A great place to find cloud-neutral, carrier-neutral best practices for moving to a digital edge model is the Interconnection Oriented Architecture Knowledge Base.
We discuss AI in the modern enterprise and why robots stealing jobs may not quite be a problem yet. We look at what enterprises should be thinking about to grow their AI practices in the future. The truth is, AI engines rely on an enterprise already having a strong data practice under control: big data capabilities, a culture that can accept AI, and data infrastructure already in place. Most companies are not there yet. This is a case where the market hype gets ahead of itself; AI is still a few years down the road for most enterprises.
We also discuss the open-source versus cloud debate with regard to Kubernetes, containers, and more. It’s hard to say what the industry will look like in the next ten years, but clearly a whole new ecosystem of technology is emerging.
Our guest on the podcast this week is Bernard Golden, CEO at Navica.
We discuss the similarities and differences between VMs, containers, and serverless computing. A lot of organizations have optimized around VMs because they have a long history of virtualization and it’s what they understand; they’ve built a lot of processes around it. Then containers came along and became the hottest thing in cloud. Now serverless is here. VMs and serverless are still not highly standardized. Containers are more standardized, and their orchestration is also moving toward standardization. Containers and serverless are development paradigms the industry is starting to embrace. A VM is not like a container: you can run containers within VMs, but you can’t run a VM inside a container.
We also look at the skills gap in cloud computing today. There is a talent shortfall even in places like Amazon, Microsoft, and Google, who are looking for tactical skills in DevOps, cloud ops, security, and more. Headhunters are befuddled: they cannot find people with the expertise and specific skills they need in cloud computing, and the technologies are changing so fast that the problem gets worse every day. 81% of IT leaders say they are concerned about missing out on cloud advances, and talent is a major part of staying ahead. This is a unique time in IT where the pace of change is visibly accelerated and innovations are compressed into much shorter timeframes. The next few years will be challenging because of the skills shortage.
We discuss modern-day data analytics for enterprises. We look at new and old AWS products like AWS Glue, AWS Kinesis, and Redshift. AWS Kinesis is something Thorn Technologies leveraged to create a product that captured location data to track user behavior at large trade shows. They could find out where users were spending time in the physical space. At first, it was hard to process the large amount of incoming user data, and it sometimes took 24 hours to provide insights. When they migrated to the cloud and used AWS Kinesis for streaming data with a Redshift data warehouse on the back end, they were able to get near real-time insight into behavior data. Kinesis is Amazon’s real-time streaming data processing service. It is good for when you have a lot of data coming in all at once and want real-time insights.
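The kind of pipeline described above, with devices pushing location events into a Kinesis stream for near real-time processing, can be sketched with a record-building helper and boto3's Kinesis client. This is an illustrative sketch, not Thorn Technologies' actual implementation; the stream name and payload fields are hypothetical.

```python
import json
import time

def build_location_record(device_id, x, y):
    """Package a location ping as a Kinesis record.

    Records with the same partition key land on the same shard,
    so keying by device keeps each device's events ordered.
    """
    payload = {"device_id": device_id, "x": x, "y": y, "ts": time.time()}
    return {
        "Data": json.dumps(payload).encode("utf-8"),
        "PartitionKey": device_id,
    }

# With boto3 installed and AWS credentials configured, the record
# would be sent with the real Kinesis PutRecord API:
#   import boto3
#   kinesis = boto3.client("kinesis")
#   kinesis.put_record(StreamName="tradeshow-pings",  # hypothetical stream
#                      **build_location_record("badge-42", 3.1, 7.8))
```

Keying by device is one reasonable choice here; a consumer (Kinesis Client Library, Lambda, or a Redshift loading job) would then aggregate the stream on the other end.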
The top data products in the cloud change the way you can solve problems. Today, the top game changers are Hadoop and spin-offs built on the Hadoop infrastructure. Hadoop gives users more power because you do not have to pre-structure your data, which opens the door to new kinds of analytics. In the past, you needed a data warehouse designed specifically for the data it held; when new information arrived, you had to spend months redesigning it to fit the existing system. With Hadoop and other distributed analytics platforms, you can manipulate your data on the fly and write analytics specific to your data.
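The schema-on-read idea behind this can be illustrated in miniature with plain Python: structure is imposed when the data is queried, not when it is stored. At Hadoop scale the same pattern runs as MapReduce, Hive, or Spark jobs, but the principle is identical. The field names below are invented for illustration.

```python
import json

# Raw event lines as they might land in a data lake: no schema was
# defined up front, and fields vary from record to record.
raw_lines = [
    '{"user": "a", "page": "/home", "ms": 120}',
    '{"user": "b", "page": "/cart", "ms": 340, "referrer": "ad"}',
    '{"user": "a", "page": "/cart", "ms": 95}',
]

def avg_latency_by_page(lines):
    """Impose structure at read time: pick only the fields this
    analysis needs, ignoring whatever else each record carries."""
    totals = {}
    for line in lines:
        event = json.loads(line)
        page = event.get("page")
        ms = event.get("ms")
        if page is None or ms is None:
            continue  # tolerate records that lack these fields
        count, total = totals.get(page, (0, 0))
        totals[page] = (count + 1, total + ms)
    return {page: total / count for page, (count, total) in totals.items()}
```

A different analysis tomorrow could read the same raw lines and pull out `referrer` instead, with no warehouse redesign in between.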
In the next five years, big data and analytics will continue to evolve. Right now doing your own analytics and making your own decisions is popular. However, IoT and AI are quickly becoming a reality. In five years it will not be you making decisions, it will be machines making decisions. We will see what machines tell us about our businesses.
We discuss the difference between hybrid cloud and multi-cloud. Hybrid is anything that’s not just one cloud provider, which includes multi-cloud. Multi-cloud is a strategy where some workloads run on one cloud and some on another. One trend we see is that big enterprises are increasingly splitting workloads between at least two major public clouds. According to Gartner, 70% of enterprises will be implementing a multi-cloud strategy by 2019. Some do it because regulators mandate that they use more than one cloud. Others decide they can’t put all their eggs in one basket and want the flexibility of multiple vendors.
Companies used to look at cloud as a data center, and it was all about IaaS. Now it’s becoming more and more abstract: PaaS, containers as a service, microservices, functions as a service, IoT developer kits, and edge computing are just the beginning. The more mature a company is in the cloud, the further up that stack it goes. Where is all this going? Customers are looking for operational efficiencies, cost savings, and more productivity and agility. In conclusion, there is no question customers will continue, if slowly, to adopt these new technologies.
We discuss cloud security best practices for 2020, what happens when IAM meets artificial intelligence, and the security patterns and products you need to know now. Our hosts also take Q&A from the audience on how compliance fits into security and what the links are between DevOps and your security team.
We discuss how to translate customer pain points into product requirements when the customer is the government. David’s role at Red Hat is to make it easy for the government to use open-source technologies. With open source, if you are not actively participating in the community, it is hard to have your requirements heard. That is a challenge for governments and the public sector: they often have exotic requirements and must operate in highly regulated environments. But they can benefit from open-source technologies, so it is important to bridge the gap. Interestingly, governments have a reputation for being laggards compared to other industries. But in cybersecurity, the government leads the way, outpacing the commercial industry with its security policies.
We also look at modern cloud careers and how to transition into the industry. It used to be important in IT to have general skills, but today knowledge of specific tools and technologies is much more important. To survive in the technology industry you have to care about where the puck is going. The price of software used to define what developers were able to learn. Now, with open-source technology freely accessible, there is no excuse not to learn. To be attractive to a future employer, it’s not just about consuming open-source technologies; it’s about living the open-source lifestyle and contributing to its communities. In the end, do not wait for your employer to train you in something you want to be doing. Spend years working on it for free so that you can learn and craft the career you want.
We discuss the many definitions floating around for what edge computing is. Some call it fog computing or MEC (mobile edge computing). The idea is simple: as wireless networks shift from person-to-person interaction to machine-to-machine interaction, the underlying architectures must change along with that shift. Things like IoT and distributed data require reliability and speed at the source. Increasingly embedded technology, from IoT and healthcare to finance and automotive, requires lower latency and solutions that scale massively to meet distribution requirements.
There are many use-cases for edge computing. Many think first of Fitbits or Amazon Echos. However, edge computing is transforming enterprises and changing entire business models based on access to real-time data at the edge. It’s used in industrial IoT, where sensor and sound collection take place in factories; managing these at the source helps solve scalability issues. Hospitals are transforming into mobile computing labs, and as this technology enters the operating room it needs to be 100% reliable and fast, because lives are on the line. Last, autonomous vehicles and drones use edge computing: decision-making is brought closer to the device, reducing the steps of communication that can introduce errors or latency and cause accidents. Clearly, edge computing is the future in many budding industries.
We discuss multi-cloud strategies and how they affect company go-to-market and business strategies. That is what THINKstrategies specializes in: the integration of cloud computing with business. We are living in an on-demand world, and customers want multiple alternatives at their disposal. Lucky for them, they now have many choices. There is SaaS competition, but even more prominent is the infrastructure competition. Now that companies like Google, Microsoft, and even IBM are entering the mix, the added competition lets enterprises select the partner that best fits their needs. However, with all this choice comes a need for enterprises to know what they’re doing and have the skills to orchestrate everything. Companies with more limited skills need to be careful: a customized world can complicate things when enterprises do not have the expertise to build on.
We also look at IoT and how it can be a game-changer in all kinds of industries. At this point, anything can become internet-connected, and that brings new opportunities to monetize information and engage customers in new ways. That is what the focus of a cloud strategy should always point back to: what does it mean for the customer?
We discuss how Kubernetes has won the war as the leader in orchestration. However, it is still not easy to use or maintain, so we explore what organizations need to consider to build operational efficiencies around the technology. Kubernetes, Docker, and containers are very different from something like VMware or Amazon in terms of adoption. When new technologies like those came into an organization, they would quickly become IT-led projects. Containers are more like Puppet, Chef, and Ansible: they pop up in clusters around the organization, started directly by DevOps teams, developers, and users. Often they already exist scattered around an organization and need a framework or logic to manage them all. That is what Rancher does: it acts as a services platform to help manage many clusters.
We talk about the key considerations organizations need to think about when deploying Kubernetes across multiple clusters and multiple clouds.
What many don’t realize is that organizations often run Kubernetes in conjunction with legacy technologies. If you keep up with cloud news, it can sometimes seem like everyone is using the latest technology. In reality, many organizations still use legacy technology like VMs and only make small incremental changes. The two biggest places Rancher runs containers are on VMware and on Amazon. In summary, Kubernetes may prolong the life of legacy technologies like VMware.
We discuss the state of OpenStack and where it will go in the next few years. Many have become pessimistic over time. 16% of OpenStack deployments are in the telecommunications carrier segment, and it will continue to do well in that category: OpenStack is willing to put up with complexity that others are not. However, growth is not where it should be. People don’t want all that complexity, so they are walking away and gravitating to Kubernetes; many organizations now run parallel Kubernetes and OpenStack strategies. With standards, if they don’t catch on in the first few years, they often die. There has to be a certain amount of enthusiasm to build a groundswell. OpenStack had the groundswell, but not enough authoritative leadership to make decisions that were good for the code base and its users.
Next, we talk about the new private cloud product from Microsoft, Azure Stack. Microsoft is one of the few public cloud providers that has started providing a private cloud right out of the gate. As a cloud player, Microsoft has gained a lot of market share in recent years and is on the way up. With this product, they found a need and are filling a niche with this new offering. Now organizations can get Azure on-premise to work with existing Microsoft infrastructures and add Azure public cloud. No one provider has ever offered this end-to-end hybrid cloud. In the end, this may allow Microsoft to make even more gains on Amazon going forward.
We discuss how Jonathan built Emerge Interactive from a technology advisory services company founded in 1998 into a full-service digital experience company focused on helping clients make the most of technology. To this day, they offer advisory services, coming in to help enterprises develop a digital roadmap and product plan and determine how to invest in it over the next 5-10 years. They also do a lot of user experience work, helping clients develop a solution and either implementing it or handing it off to an internal team to build. Building the company, like launching any career in technology, took a lot of hard work and long hours. At the beginning, it takes doing a lot of work for free to break in, from speaking gigs to advising companies. It’s important to keep up with trends and read the news, because awareness is everything, and to be constantly learning. When you are passionate about something like technology, learning is not work; it becomes part of everything you do.
How do you keep organizations on track with what they need instead of chasing trends and new buzzwords? We discuss avoiding buzzword bingo so enterprises can focus on the performance they are trying to enable with technology and the outcome they are looking for with each effort. Compatibility and maintainability are also crucial to consider when implementing new technologies in an enterprise. Enterprises need to peel back the layers to understand the life cycle of a technology and what needs to be done.
We discuss why hybrid cloud and change management are still important for companies in the second decade of modern cloud. With the exception of some startups born in the cloud, many enterprises never accepted the fact that the cloud would replace the data center. Organizations feel this is not an “or” discussion, it’s an “and” discussion. They know things need to be on the public cloud, and they need their own on-premises data center as well. Therefore, enterprises sift through their workloads and data to determine what makes sense to migrate to public cloud and what makes sense to keep on private cloud. CTP has found that 30-40% of workloads are not economically viable to move to the public cloud. Organizations can build net new applications on the cloud, but they shouldn’t move everything. For this reason, hybrid cloud will continue to be a strong focus for enterprises in the future.
Businesses are now technology-dependent: every aspect of how a business operates is becoming technology-based. Technology allows a new way for enterprises to innovate, and with that comes a need for IT organizations to transform. Beyond understanding what the cloud is and how to optimize it, these IT teams must also draw on new professional skill sets. IT organizations are struggling with how to become customer-service-oriented organizations serving internal line-of-business clients. 53% of companies are concerned about the changing roles of IT employees. A lot of the change management required has nothing to do with the technology and everything to do with how it’s delivered to the business.
Brian discusses a small bank in Ohio that has a business use-case for Kubernetes. Because Kubernetes was built to manage Google-sized technology, it is surprising that there is a reason to apply it to a small brick-and-mortar bank just starting with web and mobile. What the bank noticed is that because customers get paid on Fridays and are likely to check whether they got paid on their personal mobile devices, Friday traffic soared as soon as they launched mobile. Though Fridays are just one fifth of business days, they received ten times the traffic of any other day. This unexpected, spiky traffic pattern was a great use-case for Kubernetes, which was built to deal with exactly that kind of problem, even at small businesses.
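The bank's actual configuration isn't described in the episode, but spiky traffic like that Friday surge is typically absorbed in Kubernetes with a HorizontalPodAutoscaler. A minimal sketch, with hypothetical names and thresholds:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: mobile-banking-hpa        # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: mobile-banking-api      # hypothetical deployment
  minReplicas: 2                  # quiet Monday-Thursday baseline
  maxReplicas: 20                 # headroom for a 10x Friday spike
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # scale out when pods run hot
```

With something like this, Kubernetes adds pods as Friday demand pushes CPU past the target and scales back down for the rest of the week, so the bank isn't paying for peak capacity every day.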
We also look at the state of the big three orchestration engines: Kubernetes, Mesos, and Swarm. Kubernetes and Mesos began as internal projects at larger companies. Mesos rose to prominence around 2014 as the container scheduler Twitter used to manage its own containers. It was released as open source, and a community began to form around it. It focused on big data workloads because of its applications at Twitter, and to this day Mesos is still preferred over the other two for running big data applications. Kubernetes grew out of Google’s internal Borg technology and was released as open source, focused on the 80/20 use-cases such as batch and container workloads. When open-source solutions are released, the community flocks to one, and Kubernetes won more of the community than any other solution. With a strong community, Kubernetes is better suited to work with many different types of applications. Last, Docker came out with Swarm to compete in the market. It keeps things as simple as possible for getting a few containers clustered together, and it has evolved to work mostly around Docker’s data center products. Tracking this industry over time reveals how containers have evolved to have use-cases in enterprises of all sizes.
We discuss how Microsoft Azure is catching up with AWS. It seems to become more compelling each year with new services like Azure Stack and Azure Container Service. In 2016, AWS grew at the same pace as the market, but Microsoft Azure grew much faster. Microsoft launched Azure 5-6 years after AWS, so AWS has an enormous lead. Now, Microsoft Azure is at least competitive with AWS in every area. This allows the two to compete over pricing. Enterprises do not automatically choose AWS anymore. They now research what the right cloud is for the organization. Many organizations are also embracing multi-cloud strategies. This means using feature capabilities of multiple public clouds for different pieces of the enterprise. For instance, it saves money to run Microsoft services on Azure, so that is often a feature that gets separated in cloud strategy.
Our guest on the podcast this week is Mike Bainbridge, digital technologist, speaker, and cloud expert. We discuss digital innovation and the disruptive business models that rely on it. Digital disruption comes up a lot today when thinking about the startups that challenge the way things have always been done: Airbnb and Uber are technology companies at their core, not hospitality and transportation companies. We look at AI and how it will affect the jobs of the future. Last, we discuss vendor lock-in and when it can be a good idea to go all in on one vendor.
We discuss Austen’s early bet on serverless computing from the first time he saw AWS Lambda. Serverless, even in the early days, has many benefits. It is microservice-based, event-driven, requires no administration, and has a compelling “pay-per-execution” pricing model.
The Serverless Framework was launched as an application framework. The problem with serverless computing today is that if you want to build a sophisticated system on this type of service, you’re dealing with lots of independent units of deployment: one application is a combination of many Lambda functions. Dealing with all of this, not to mention the event-driven computing model, can be chaotic. The Serverless Framework offers a simple file that defines a serverless application; the framework provisions all the infrastructure for you, and the app is up in seconds.
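The "simple file" here is the Serverless Framework's serverless.yml. A minimal sketch, with a hypothetical service and handler name, might look like:

```yaml
service: hello-service            # hypothetical service name

provider:
  name: aws
  runtime: python3.9

functions:
  hello:
    handler: handler.hello        # a handler.py file with a hello() function
    events:
      - httpApi:                  # wire the function to an HTTP endpoint
          path: /hello
          method: get
```

Running `serverless deploy` against a file like this provisions the Lambda function, the HTTP endpoint, and the supporting infrastructure in one step, which is the point being made about getting an app up in seconds.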
We are still in the early phases of serverless computing, and the trend is yet to be defined. It’s impressive how fast the cloud providers are moving with serverless computing and building new features around it, and enterprise adoption has also been fast. The challenge of serverless computing is that adopting it means a lot of changes at once for an organization, often including cultural shifts. Serverless computing requires a new way of thinking for enterprises, which is a challenge. But for the enterprises that embrace it, the gains are worth it.
We discuss the changing face of large enterprises when innovating with technology. The technology we see in big web companies from Facebook, Google, and Amazon is absolutely going to be used to reinvent how large enterprises function. But large enterprises do not need to transform into tech companies like Google to be successful. More likely the opposite is the case. Enterprises need to realize that they already are a great source of innovation and that with a focus on customers and on technology they can lead the way to success. It does not have to look exactly like Google for large enterprises to be innovative.
Figuring out what you want it to feel like is the hardest part for large enterprises. If you’re a traditional tire company, for instance, you know the tire industry, but you don’t know what it feels like to be a technology company that moves quickly and safely. So how do you get the people inside the tire company to know what it feels like to move fast? How can they apply that to tires? Knowing how the business works is incredibly important, and these enterprises know their markets better than anyone. The trick is to teach them how to use technology to enhance the business they already know.
Chef is a company built around automation. It began with infrastructure automation and has since added other products. Chef found bottlenecks at security and compliance, which led to InSpec. InSpec allows you to express compliance rules as code so you can continuously test and ensure you comply with standards. Another new Chef product is Habitat, for application automation. Habitat acts as a smart supervisor that can build and release the application and manage it as well.
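To make "compliance within code" concrete, here is a minimal InSpec control of the kind commonly shown in its documentation; the control name and rule are illustrative, not taken from the episode:

```ruby
# A minimal InSpec control: a compliance rule expressed as testable code.
control 'ssh-disable-root' do
  impact 1.0
  title 'SSH root login must be disabled'
  describe sshd_config do
    its('PermitRootLogin') { should eq 'no' }
  end
end
```

Running `inspec exec` against a target machine evaluates controls like this continuously, so drift from the standard shows up as a failing test rather than an audit surprise.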
We discuss the founding story of CloudHealth, testing ideas to find the right problem to solve. We look at how Joe took the company from idea through finding early customers and fundraising. Joe made sure early on not to get attached to ideas, but to define key hypotheses and converge to the real opportunities through testing. As he became more confident in what he was building, he began to write more of the code for it. We look at why successful entrepreneurs need to be willing to embrace contrary opinions.
CloudHealth does cloud service management. They deliver a SaaS-based single pane of glass, single pane of governance for managing the full life-cycle of applications and infrastructure across public and private clouds. They currently have four products: Amazon, Azure, Google, and a Data Center product. Each provides integrated reporting, recommendations, and active policy management. The policy management does not just monitor changes that deviate from your internal policies, but drives active changes to your environments to keep them in compliance. It works like a control plane that sits on top of everything you use to manage different environments.
A typical management suite in the cloud consists of 10-12 different tools plus multiple cloud environments. CloudHealth lets you configure them all in its platform; it collects the information that resides in those different tools and cloud environments and brings it back into one console, showing what the data means and how it interacts. CloudHealth then provides integrated reporting, integrated recommendations, and active policy recommendations. With a click of a button, you can determine what it would take to integrate different tools and what provisioning the integration requires. This makes managing the cloud much more streamlined and cost-efficient.
We discuss what serverless computing means for OpenStack private clouds. It is time to recognize that hybrid is here for a long time and we will be mixing public clouds with private clouds in the long run. We also look at Red Hat’s recent deal with AWS for OpenShift. This is another example of coopetition with AWS, which has sought out many more partnerships lately. Vendors are finding more opportunities to partner with AWS to prevent themselves from losing customers.
We discuss how you can sometimes find more impactful insights from smaller boutique research firms than from the larger giants. Aragon Research is a full-spectrum industry analyst research firm that provides advisory services to those building, buying, or investing in emerging technologies.
We take a look at the announcements Amazon CTO Werner Vogels made at the latest AWS Summit, including new SaaS contracts in the AWS Marketplace that allow smaller SaaS companies to outsource their billing to AWS.
Amazon is a company that seems to keep doing things right. They are hard to avoid as leaders in cloud computing right now; even when they make mistakes, they seem able to pivot them quickly into useful tools. They own somewhere around 80% of the public cloud market at the moment, and it is no surprise: they have the best technology.
We also look at Amazon CodeStar, their improved database services, and upgraded machine learning tools such as Amazon Rekognition.
Amazon Rekognition uses machine learning for image detection to automatically monitor content. This allows us to identify objectionable images automatically. This has use-cases anywhere from identifying fake news to preventing issues with advertisements on objectionable content. Amazon is using machine learning to look at images and rank them on a 9-point scale of how objectionable the image is.
Rekognition is a deep learning service, meaning it is built on several layers of neural networks: the first layer does feature recognition, the next classifies objectionable content, and so on. As of now, Amazon has built in a standard scale, but it would be interesting if users could choose their own parameters for what is objectionable. Giving context for content is a difficult step in the process, which would also be an interesting next move for Amazon. A per-user objectionability rating would let the system learn individual preferences as well.
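A moderation pipeline on top of Rekognition typically calls its DetectModerationLabels API and then filters the returned labels by confidence. The filtering step is plain logic and can be sketched as below; the bucket name, threshold, and sample response values are hypothetical, though the response shape matches the real API.

```python
def flag_objectionable(response, min_confidence=80.0):
    """Reduce a detect_moderation_labels-style response to the label
    names considered actionable at the given confidence threshold."""
    return sorted(
        label["Name"]
        for label in response.get("ModerationLabels", [])
        if label["Confidence"] >= min_confidence
    )

# With boto3 installed and AWS credentials configured, the response
# would come from the real Rekognition API:
#   import boto3
#   rekognition = boto3.client("rekognition")
#   response = rekognition.detect_moderation_labels(
#       Image={"S3Object": {"Bucket": "my-bucket", "Name": "upload.jpg"}},
#       MinConfidence=50,
#   )

# A response shaped like the real API, with invented values:
sample = {
    "ModerationLabels": [
        {"Name": "Suggestive", "Confidence": 91.2, "ParentName": ""},
        {"Name": "Violence", "Confidence": 42.7, "ParentName": ""},
    ]
}
```

Keeping the threshold logic separate from the API call also makes it easy to tune per deployment, which is roughly the "choose your own parameters" idea discussed above.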
We also discuss data privacy, and how zombie cloud data can haunt you when you think it has been deleted but it still exists somewhere else. There are legal issues around this zombie data that are being exposed now even with subjects like student standardized testing. Of course, sometimes this can be a great feature when you accidentally delete something and there is a way to find it again.
We discuss how a lot of big players in tech from the past are now gone, and this trend makes us look closely at the big tech companies today whose growth is slowing, such as Oracle and IBM. They seem to have lost their disruptive edge and struggle with the new business models they compete against, but experience tells us they are definitely survivors. They’re making money around a legacy business that we still live with today, but that is diminishing with the widespread adoption of the cloud. If they do move into the cloud, which they have taken steps towards, it doesn’t necessarily make sense because they would be taking sales from their own legacy technology.
IBM is now reinventing how to work within this new type of model that Microsoft and Amazon have built. IBM is trying to play catch up on the growth of the cloud and figuring out how to make money on it. With Watson and IoT, they are doing fascinating stuff in this space which could launch them into a hybrid cloud model with various customers, but there is no question they are struggling.
We look at how AWS is looking to simplify building and deploying apps on the cloud platform with a new service, AWS CodeStar. This service makes it simpler to set up projects by using templates for web applications, web services, and others. Developers can provision projects and resources from coding to testing to deployment. It seems to be yet another service AWS is providing that Microsoft and Google don’t have, which further solidifies its leadership in cloud computing. Amazon is great at targeting clients at all levels, from large enterprises to capturing the minds and hearts of the tech geeks. With AWS CodeStar, they aim to make it easier for developers to build applications on the cloud. 2017 is the year of the cloud land grab. Each vendor is trying to get as many people onto their platforms as possible, and AWS is trying to convert anyone and everyone.
AWS CodeStar was launched in response to the challenges enterprises face with agile software development processes. The first challenge for a new software project is often a lengthy setup process before coding can start. The ability to pull this out to the cloud and get up and running quickly is a huge strategic advantage, especially considering enterprises could take years to set up processes like these.
When an enterprise decides to switch to agile development or DevOps, there is a huge initial infrastructure setup involved: getting the IDE set up, the code repository set up with the right security, and the build and deploy systems created. Amazon offering this in a box, with end-to-end solutions including JIRA integration for bug tracking, will be huge for saving clients time and headaches. It would not be a surprise to see the other cloud vendors start to imitate CodeStar now that AWS has raised the bar.
Amazon has in a way become the new IBM: the 800-pound gorilla that others strive to catch up to. Looking at total capacity, they have close to fourteen times the data capacity of the next five vendors beneath them, and it will be hard to catch them. Other vendors often claim faster growth than AWS, but only because they are playing on smaller fields; AWS is so big there is less room to grow.
Microsoft is now offering Cloud Migration Assessment services, which walk clients through an evaluation of resources they currently use to determine what to move to the cloud and what it will cost to do that. AWS offers a similar tool, and it’s clear that both tools will be used to promote their internal products. It may be a useful tool to determine the cost of migrating to the cloud, but it’s important to remember that everybody’s needs for the cloud are different, so enterprises need to focus on what their particular needs and requirements are.
No technology negates the need for good design and planning, and cloud is no exception. The pricing of moving to the cloud depends on how your particular IT is structured, and cross-vendor pricing must be considered as well. Your configuration will depend on your requirements and which vendors you are piecing together. These tools often underestimate what a migration will cost; they rarely overestimate it.