Info

THE DOPPLER

The Doppler podcasts cover all things cloud while focusing on how to prepare the traditional enterprise to look beyond conventional computing. We talk about what’s new and what’s working, and we bring on expert guests who provide the advice you need to be successful in the cloud. Read by over 10,000 IT leaders, The Doppler weekly email reports and quarterly print editions answer your critical cloud questions and keep you informed on the cloud trends that matter most. For more information, please visit cloudtp.com/doppler.
Nov 17, 2017

We discuss why enterprises are adopting public and hybrid cloud strategies, the decline of private cloud and PaaS, and the announcements we expect to hear at re:Invent 2017.

With increased cost savings and agility coming from public and hybrid cloud strategies, more enterprises are ditching private cloud and PaaS solutions.

Nov 3, 2017

Our guest on the podcast this week is Lori MacVittie, Principal Tech Evangelist at F5 Networks.

We discuss what it’s been like tracking cloud computing trends for the past ten years. We look at the public cloud: are we leveling out on the features and functions being stuffed into it? At some point, the pricing wars among public cloud providers will end, and providers will need to differentiate fast. It will be interesting to see whether AWS continues to be a leader with many followers or whether each provider finds its niche. We also look at top predictions for cloud computing in 2018.

Oct 27, 2017

Our guest on the podcast this week is JP Morgenthal, CTO Application Services at DXC Technology.

We discuss the links and channels between microservices and DevOps. What happens to architecture in a Scrum world? Many think you should just start with Scrum and fix problems as they come. That is primarily a dev perspective; formal architects need to have a say early in the process to set projects up for success. One recommendation is to make architecture design part of a sprint. We also look at SOA versus microservices and how we are now able to manage the volume of services that came with SOA.

Oct 26, 2017

We discuss why Vanguard went to the public cloud, the value of DevOps and best practices for IT leaders who are just getting started on their cloud initiative.

Oct 19, 2017
 
Oct 13, 2017

Our guest on the podcast this week is James Staten, Global Head of Market Development at Equinix.

We discuss digital edge computing and why it is important for enterprises to embrace it; it even saves them money in the long run. Every new product today needs sensors to gather information and send it back to the manufacturer for feedback and safety data. This applies to everything from autonomous cars and fitness gear to elevators and the military. The tricky part of all these sensors is that you do not want the data traveling back to headquarters so that machine learning can run in your own data warehouse; that takes too long. The insights need to happen at the device. That means enterprises need a digital edge strategy.

Will this reach a saturation point with all this information coming in through sensors? Will IoT clients end up with too much information? One problem is that people are gathering every data point but not necessarily understanding what the information means. It is important for organizations to decide what data to keep and why. A great place to find cloud-neutral, carrier-neutral best practices for moving to a digital edge model is the Interconnection Oriented Architecture Knowledge Base.
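To make the idea concrete, here is a minimal, hypothetical sketch of edge-side processing: raw sensor readings are summarized on the device, and only the compact result (plus any alert) travels upstream. The window size, threshold, and field names are illustrative assumptions, not details from the episode.

```python
# Minimal sketch of edge-side aggregation: process raw sensor readings locally
# and forward only compact summaries upstream. All names here are illustrative.
import json
import statistics
import time
from typing import List

def summarize_window(readings: List[float]) -> dict:
    """Reduce a window of raw sensor readings to the few numbers worth keeping."""
    return {
        "timestamp": time.time(),
        "count": len(readings),
        "mean": statistics.mean(readings),
        "max": max(readings),
        "min": min(readings),
    }

def process_at_edge(readings: List[float], alert_threshold: float) -> str:
    """Decide locally whether anything needs to leave the device at all."""
    summary = summarize_window(readings)
    # Only the summary (and any alert flag) travels upstream, not every data point.
    if summary["max"] > alert_threshold:
        summary["alert"] = True
    return json.dumps(summary)

if __name__ == "__main__":
    window = [21.4, 21.7, 22.1, 35.9, 21.6]  # e.g., temperature readings
    print(process_at_edge(window, alert_threshold=30.0))
```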

Oct 6, 2017

Our guest on the podcast this week is Derrick Harris, Founder at ARCHITECHT.

We discuss AI in the modern enterprise and why robots stealing jobs may not quite be a problem yet. We look at what enterprises should be thinking about to grow their AI practices in the future. The truth is, AI engines rely on an enterprise already having its data practice under control: big data capabilities, a culture that can accept AI, and the right data infrastructure all need to be in place. Most companies are not there yet. This is a case where the market hype gets ahead of itself, and AI is still a few years down the road for most enterprises.

We also discuss the open-source versus cloud debate with regard to Kubernetes, containers, and more.  It’s hard to say what the industry will look like in the next ten years, but clearly a whole new ecosystem of technology is emerging.

Oct 1, 2017

Our guest on the podcast this week is Bernard Golden, CEO at Navica.

We discuss the similarities and differences between VMs, containers, and serverless computing. A lot of organizations have optimized around VMs because they have a long history of virtualization and it’s what they understand; they have built a lot of processes around that. Then containers came along, and that was the hottest thing in cloud. Now serverless is here. VMs and serverless are still not highly standardized. Containers are more standardized, and their orchestration is also moving toward standardization. Containers and serverless are development paradigms that the industry is starting to embrace. A VM is not the same as a container: you can run containers inside VMs, but you cannot run a VM inside a container.

We also look at the skills gap in cloud computing today. There is a talent shortfall even at places like Amazon, Microsoft, and Google, which are looking for tactical skills in DevOps, cloud ops, security, and more. Headhunters are befuddled because they cannot find people with the expertise and specific skills they are looking for in cloud computing, and the technologies are changing so fast that the problem gets worse every day. 81% of IT leaders report being concerned about missing out on cloud advances, and talent is a major part of staying ahead. This is a unique time in IT where the pace of change is visibly accelerated and innovations are compressed into much shorter timeframes. The next few years will be challenging because of the skills shortage.

Sep 14, 2017

Our guest on the podcast this week is Andy Thurai, Chief Strategist, IBM Watson Cloud Platform - CTO Office, IBM Cloud at IBM.

We discuss dark data and how IBM Watson can understand it for decision-making. Normally when people refer to data, they think of structured data in rows of numbers. But in the last few years, data has changed into images, videos, medical scans, sensor scans, audio, and telematics. This dark data is unstructured, which makes it difficult to analyze with existing systems and hard to use for decisions. Currently 80% of the world’s data collection is dark data, and that is expected to grow to 90% by 2020. So if you are running a data insights-based company, you need to analyze this unstructured data. IBM Watson can read unstructured data and make meaning out of it. Imagine a home security camera that recognizes the UPS uniform and lets you know when your package arrives. This is an example of machines assisting humankind, not competing with it.

Facebook, Uber, and Google are fast-moving companies that are becoming the new definition of “enterprise,” and traditional enterprises are trying to keep up with their speed. Looking at these innovative businesses, one common thread is that they all have a huge investment in cloud and in AI. It is not just about machine learning and deep learning. Ultimately, if machines can understand users in their native tongue, when they need it most, and grasp the urgency, needs, and emotions of that moment, they can help you make critical decisions based on human context. That is a powerful differentiator in the marketplace for any enterprise.

Aug 31, 2017

Our guest on the podcast this week is Jeff Thorn, CEO at Thorn Technologies.

We discuss modern-day data analytics for enterprises. We look at new and old AWS products like AWS Glue, AWS Kinesis, and Redshift. AWS Kinesis is something Thorn Technologies leveraged to create a product that captures location data to track user behavior at large trade shows; they could find out where attendees were spending time in the physical space. At first, it was hard to process the large amount of incoming user data, and it sometimes took 24 hours to provide insights. When they migrated to the cloud and used AWS Kinesis for streaming data with a Redshift data warehouse on the back end, they were able to get near real-time insight into behavior data. Kinesis is Amazon’s real-time streaming data processing service; it is a good fit when you have a lot of data coming in all at once and want real-time insights.
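The episode does not go into code, but a minimal, hypothetical producer sketch with boto3 shows what the streaming side of such a pipeline can look like. The stream name, region, and record fields are assumptions; downstream, a separate consumer (or Kinesis Data Firehose) would load the events into Redshift.

```python
# Minimal sketch of pushing location events into an AWS Kinesis stream with boto3.
# Stream name and record shape are assumptions for illustration only; a downstream
# consumer would load these events into a Redshift data warehouse for analysis.
import json
import time
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

def publish_location_event(stream_name: str, badge_id: str, zone: str) -> None:
    """Send one attendee-location event; PartitionKey spreads load across shards."""
    record = {"badge_id": badge_id, "zone": zone, "ts": time.time()}
    kinesis.put_record(
        StreamName=stream_name,
        Data=json.dumps(record).encode("utf-8"),
        PartitionKey=badge_id,
    )

if __name__ == "__main__":
    publish_location_event("tradeshow-locations", badge_id="A1234", zone="booth-17")
```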

Where is big data now and where is it heading?

The top data products in the cloud change the way you can solve problems. Today, the top game changers are Hadoop and spin-offs built on the Hadoop infrastructure. Hadoop gives users more power because you do not have to pre-structure your data, which opens the door to new kinds of analytics. In the past, you needed a data warehouse designed specifically for the data it held, and incorporating new information meant months of redesign to fit it into the existing system. With Hadoop and other distributed analytics platforms, you can manipulate your data on the fly and write analytics specific to your data.
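As a hedged illustration of that schema-on-read idea, here is a small PySpark sketch (Spark being one of those Hadoop-ecosystem spin-offs) that reads raw JSON events and runs an ad hoc aggregation without a predefined warehouse schema. The file path and field names are placeholders.

```python
# Schema-on-read sketch with PySpark: read raw JSON events without a predefined
# warehouse schema and run an ad hoc analysis. Path and field names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ad-hoc-analytics").getOrCreate()

# Spark infers the structure from the data itself; no months of schema redesign needed.
events = spark.read.json("hdfs:///data/raw/events/*.json")

top_actions = (
    events.groupBy("user_id", "action")
          .count()
          .orderBy(F.desc("count"))
)

top_actions.show(10)
spark.stop()
```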

In the next five years, big data and analytics will continue to evolve. Right now doing your own analytics and making your own decisions is popular. However, IoT and AI are quickly becoming a reality. In five years it will not be you making decisions, it will be machines making decisions. We will see what machines tell us about our businesses.

Aug 24, 2017

Our guest on the podcast this week is Issy Ben-Shaul, Co-Founder & CEO at Velostrata.

We discuss the difference between hybrid cloud and multi-cloud. Hybrid is anything that is not just one cloud provider, which includes multi-cloud. Multi-cloud is a strategy where some workloads run on one cloud and some on another. One trend we see is that big enterprises are increasingly splitting workloads between at least two major public clouds as a multi-cloud strategy. According to Gartner, 70% of enterprises will be implementing a multi-cloud strategy by 2019. Some do it because regulators mandate more than one cloud. Others decide they cannot put all their eggs in one basket and want the flexibility to use multiple vendors.

Companies used to look at cloud as a data center and it was all about IaaS. Now it’s becoming more and more abstract: PaaS, containers as a service, microservices, functions as a service, IoT developer kits, and edge computing are just the beginning. It seems that the more mature a company is in the cloud, the further up that stack they go. Where is all this going? Customers are looking for operational efficiencies, cost savings and more productivity and agility. In conclusion, there is no question customers will slowly continue to adopt these new technologies.

Aug 24, 2017

The Doppler podcast went live this week. CTP cloud evangelists David Linthicum and Mike Kavis discuss what happens when machine learning meets identity and encryption.

We discuss cloud security best practices for 2020, what happens when IAM meets artificial intelligence, and the security patterns and products you need to know now. Our hosts also take Q&A from the audience on how compliance fits into security and what the links are between DevOps and your security team.

Aug 18, 2017

Our guest on the podcast this week is David Egts, Chief Technologist, North America Public Sector at Red Hat.

We discuss how to translate customer pain points into product requirements when the customer is the government. David’s role at Red Hat is to make it easy for the government to use open source technologies. With open source, if you are not actively participating in the community it is hard to have your requirements heard. That is a challenge for governments and the public sector. They often have exotic requirements and require highly regulated environments. But they can benefit from open source technologies, so it is important to bridge the gap. Interestingly, governments have a reputation of being laggards when compared to other industries. But in cybersecurity, the government leads the way. They outpace the commercial cybersecurity industry with their security policies.

We also look at modern cloud careers and how to transition into the industry. It used to be important in IT to have general skills, but today knowledge of specific tools and technologies is much more important. To survive in the technology industry you have to care about where the puck is going. The price of software used to limit what developers were able to learn; now, with open source technology, there is no excuse not to learn, because the access is there. To be attractive to a future employer, it is not just about consuming open source technologies, it is also about living the open-source lifestyle and contributing to its communities. In the end, do not wait for your employer to train you in something you want to be doing. Spend time working on it for free so that you can learn and craft the career you want.

Aug 16, 2017

Our guest on the podcast this week is Don Duet, President and COO at Vapor IO.

We discuss the many definitions floating around for edge computing; some call it fog computing or MEC (mobile edge computing). At its core it is simple: as the shift goes from wireless networks and person-to-person interaction to machine-to-machine interaction, the underlying architectures must change along with it. Things like IoT and distributed data require reliability and speed at the source. Increasingly embedded computing, from IoT and healthcare to finance and automotive, demands lower latency and solutions that scale massively to meet distribution requirements.

There are many use cases for edge computing. Many people think first of Fitbits or Amazon Echos, but it is transforming enterprises and changing entire business models based on access to real-time data at the edge. It is used in places like industrial IoT, where sensor and sound collection take place in factories; managing these at the source helps solve scalability issues. Hospitals are transforming into mobile computing labs, and as this technology enters the operating room it needs to be 100% reliable and fast, because lives are on the line. Finally, autonomous vehicles and drones use edge computing: decision-making is brought closer to the device, reducing the steps of communication that can introduce errors or latency and cause accidents. Clearly, edge computing is the future in many budding industries.

Aug 10, 2017

Our guest on the podcast this week is Jeff Kaplan, Managing Director at THINKstrategies.

We discuss multi-cloud strategies and how they affect company go-to-market and business strategies. That is what THINKstrategies specializes in: the integration of cloud computing with business. We are living in an on-demand world, and customers want multiple alternatives at their disposal. Lucky for them, they now have many choices. There is SaaS competition, but even more prominent is the infrastructure competition. Now that companies like Google, Microsoft, and even IBM are entering the mix, the added competition lets enterprises select the partner that best fits their needs. However, with all this choice comes a need for enterprises to know what they are doing and to have the skills to orchestrate everything. Companies with more limited skills need to be careful; a customized world can complicate things when enterprises do not have the expertise to build on.

We also look at IoT and how it can be a game-changer in all kinds of industries. At this point, anything can become internet-connected, which brings new opportunities to monetize information and engage customers in new ways. That is what the focus of a cloud strategy should always point back to: what it means for the customer.

Aug 7, 2017

Our guest on the podcast this week is Shannon Williams, Co-Founder and VP, Sales at Rancher Labs.

We discuss how Kubernetes has won the war as the leader in orchestration. However, it is still not easy to use or maintain, so we explore what organizations need to consider to build operational efficiencies around the technology. Kubernetes, Docker, and containers are very different from something like VMware or Amazon in terms of adoption. When new technologies come into an organization, they usually become IT-led projects quickly. Containers are more like Puppet, Chef, and Ansible: they pop up in clusters around the organization, started directly by DevOps teams, developers, and users. A lot of the time they already exist, scattered around an organization, and need a framework or logic to manage them all. That is what Rancher does: it acts as a services platform to help manage many clusters.

Key Considerations for Kubernetes

We talk about the key considerations organizations need to think about when deploying Kubernetes across multiple clusters and multiple clouds. Some key points are:

  • Understanding expectations of scale
  • Knowing where you will be running it
  • Making it highly available
  • Understanding your organization’s tolerance for failure

What many don’t realize is that organizations are often running Kubernetes in conjunction with legacy technologies. If you keep up with cloud news, it can sometimes seem like everyone is using the latest technology. In reality, many organizations still use legacy technology like VMs and make only small incremental changes. The two biggest places Rancher runs containers are on VMware and on Amazon. In summary, Kubernetes may prolong the life of legacy technologies like VMware.
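As a rough sketch of what working across several clusters can look like in practice, the snippet below uses the official Kubernetes Python client to survey node and pod health in each cluster via kubeconfig contexts. The context names and the simple health check are assumptions for illustration, not anything Rancher-specific.

```python
# Sketch: surveying several Kubernetes clusters (e.g., one on VMware, one on AWS)
# from a single script using kubeconfig contexts. Context names are placeholders.
from kubernetes import client, config

CONTEXTS = ["onprem-vmware", "aws-prod"]  # assumed kubeconfig context names

def cluster_summary(context: str) -> None:
    """Count nodes and non-running pods in one cluster to gauge health and scale."""
    config.load_kube_config(context=context)
    core = client.CoreV1Api()
    nodes = core.list_node().items
    pods = core.list_pod_for_all_namespaces().items
    not_running = [p for p in pods if p.status.phase not in ("Running", "Succeeded")]
    print(f"{context}: {len(nodes)} nodes, {len(pods)} pods, {len(not_running)} not running")

if __name__ == "__main__":
    for ctx in CONTEXTS:
        cluster_summary(ctx)
```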

Aug 3, 2017

Our guest on the podcast this week is Randy Bias, VP, Technology and Strategy, Cloud Software at Juniper Networks.

We discuss the state of OpenStack and where it will go in the next few years. Many have become pessimistic over time. 16% of OpenStack deployments are in the telecommunications carrier segment, and it will continue to do well in that category, where users are willing to put up with complexity that others are not. However, growth is not where it should be. People do not want all that complexity, so they are walking away from it and gravitating to Kubernetes; many organizations now run parallel Kubernetes and OpenStack strategies. With standards, if they do not catch on in the first few years they often die, and there has to be a certain amount of enthusiasm to build a groundswell. With OpenStack there was a groundswell, but there was not enough authoritative leadership to make decisions that were good for the code base and its users.

Next, we talk about Microsoft’s new private cloud product, Azure Stack. Microsoft is one of the few public cloud providers to offer a private cloud right out of the gate. As a cloud player, Microsoft has gained a lot of market share in recent years and is on the way up. With this product, it found a need and is filling a niche: organizations can now run Azure on-premises alongside their existing Microsoft infrastructure and add Azure public cloud. No other provider has offered this kind of end-to-end hybrid cloud. In the end, this may allow Microsoft to make even more gains on Amazon going forward.

Jul 27, 2017
Our guest on the podcast this week is Rob Kaloustian, VP, WW Technical Services at Commvault.
 

We discuss the different capabilities of Commvault, from cloud data management to cloud automation, ransomware protection, and beyond. Essentially, they make data less costly to move to the cloud. They differ from competitors in their scalability, indexing, deep metadata search, deep content search, and strong security capabilities.

Ultimately, large enterprises want to diversify where their data is stored. This physical diversity provides better protection for them by giving them contingency plans if something goes wrong. At first, large enterprises are concerned with having their data in the public cloud at all. Once they warm up to public cloud, they look to build multiple backups of their data in different places so that if it fails in one place it is still protected and functioning somewhere else.

Jul 20, 2017

Our guest on the podcast this week is Jonathon Hensley, CEO at Emerge Interactive.

We discuss how Jonathon founded Emerge Interactive as a technology advisory services company in 1998 and evolved it into a full-service digital experience company focused on helping clients make the most of technology. To this day, they offer advisory services, coming in to help enterprises develop a digital roadmap and product plan and determine how to invest in it over the next 5-10 years. They also do a lot of user experience work, helping clients develop a solution and either implementing it or handing it off to an internal team to build. Building the company, like launching any career in technology, took a lot of hard work and long hours. At the beginning, breaking in means doing a lot of work for free, from speaking gigs to advising companies. It is important to always keep up with trends and read the news, because awareness is everything; you should be constantly learning. When you are passionate about something like technology, learning is not work and becomes part of everything you do.

How do you keep organizations on track with what they need instead of chasing trends and new buzzwords? We discuss avoiding buzzword bingo so enterprises can focus on the performance they are trying to enable with technology and the outcome they are looking for with each effort. Compatibility and maintainability are also crucial to consider when implementing new technologies in an enterprise. Enterprises need to peel back the layers to understand the life cycle of a technology and what needs to be done.

Jul 14, 2017

Our guests on the podcast this week are Lynda Stadtmueller, Vice President, Cloud Services at Frost & Sullivan and Kelly Ireland, Founder and CEO at CB Technologies.

We discuss why hybrid cloud and change management are still important for companies in the second decade of modern cloud. With the exception of some startups born in the cloud, many enterprises never accepted the fact that the cloud would replace the data center. Organizations feel this is not an “or” discussion, it’s an “and” discussion. They know things need to be on the public cloud, and they need their own on-premises data center as well. Therefore, enterprises sift through their workloads and data to determine what makes sense to migrate to public cloud and what makes sense to keep on private cloud. CTP has found that 30-40% of workloads are not economically viable to move to the public cloud. Organizations can build net new applications on the cloud, but they shouldn’t move everything. For this reason, hybrid cloud will continue to be a strong focus for enterprises in the future.

How will the cloud change IT within companies? 

Businesses are now technology dependent; every aspect of how a business operates is becoming technology-based. Technology allows a new way for enterprises to innovate, and with that comes a need for IT organizations to transform. Beyond understanding what the cloud is and how to optimize it, these IT teams must also draw on new professional skill sets. IT organizations are struggling with how to become customer service-oriented organizations serving internal lines of business. 53% of companies are concerned about the changing roles of IT employees. A lot of the required change management has nothing to do with the technology and everything to do with how it is delivered to the business.

Jul 6, 2017

Our guest on the podcast this week is Brian Gracely, Director of Product Strategy at Red Hat.

Brian discusses a small bank in Ohio and its business use case for Kubernetes. Because Kubernetes was built to manage Google-sized technology, it is surprising that there is a reason to apply it to a small brick-and-mortar bank just starting with web and mobile. What the bank noticed is that because customers get paid on Fridays and are more likely to check whether they got paid on their personal mobile device, Friday traffic soared as soon as mobile launched. Fridays, just one fifth of business days, received ten times the traffic of any other day. This unexpected, spiky traffic pattern was a great use case for Kubernetes, which was built to deal with exactly that kind of problem, even at small businesses.
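A hedged sketch of how that spiky-Friday pattern is typically absorbed: a Horizontal Pod Autoscaler that scales a deployment up and down on CPU load, created here with the Kubernetes Python client. The deployment name, namespace, and replica limits are assumptions for illustration, not details from the episode.

```python
# Sketch: a Horizontal Pod Autoscaler absorbs spiky Friday traffic by scaling a
# deployment on CPU load. Deployment name, namespace, and limits are assumptions.
from kubernetes import client, config

def create_hpa(namespace: str = "default", deployment: str = "mobile-banking") -> None:
    config.load_kube_config()
    hpa = client.V1HorizontalPodAutoscaler(
        metadata=client.V1ObjectMeta(name=f"{deployment}-hpa"),
        spec=client.V1HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V1CrossVersionObjectReference(
                api_version="apps/v1", kind="Deployment", name=deployment
            ),
            min_replicas=2,    # quiet weekdays
            max_replicas=20,   # ten-times-normal Friday spikes
            target_cpu_utilization_percentage=70,
        ),
    )
    client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
        namespace=namespace, body=hpa
    )

if __name__ == "__main__":
    create_hpa()
```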

We also look at the state of the big three orchestration engines: Kubernetes, Mesos, and Swarm. Kubernetes and Mesos began as internal projects at larger companies. Mesos started out as a container scheduler that Twitter used to manage its own containers; it was released as open source, and a community began to form around it. It focused on big data elements because of their relevance to Twitter, and to this day Mesos is still preferred for running big data applications compared to the other two. Kubernetes grew out of Borg, the technology Google used internally, before being released as open source. It focused on the 80/20 set of use cases, such as batch and container workloads. When open-source solutions are released, the community tends to flock to one, and Kubernetes won more of the community than any other solution. With a strong community, Kubernetes is better suited to work with many different types of applications. Last, Docker came out with Swarm to compete in the market. It keeps things as simple as possible to get a few containers clustered together, and it has evolved to work mostly with Docker’s data center products. Tracking this industry over time shows how containers have evolved to have use cases in enterprises of all sizes.

Jun 30, 2017

We discuss trends so far in 2017 in cloud computing, DevOps, IoT, and machine learning. Enterprises constantly hear about the latest trends, and sometimes the industry moves too fast for them to keep up. They still worry about moving from one system to another and establishing basic cloud practices; cloud is not core to their businesses yet. At some point, these large enterprises need to optimize the changes they have already made instead of always worrying about the next trend. They need to get better at scale to ensure their business will grow.

One interesting trend has been the changing landscape of service-oriented architecture (SOA) and the more modern microservices. Lori wrote a blog post in 2008 on the exaggerated death of SOA. In revisiting the post recently, she concludes that SOA is alive. The industry now focuses on microservices, which are action-oriented: actions like logging in, logging out, checking statuses, purchasing, and checking carts are now standard, and these services link together to make experiences for users. Perhaps a better descriptor of SOA today is event-driven service architecture. Revisiting thought leadership from 2008 has shown how far the industry has come and how far it will go in the next 10 years.

Jun 19, 2017

Our guest on the podcast this week is Tatiana Lavrentieva, Cloud Solutions Architect at Cloud Technology Partners.

We discuss how Microsoft Azure is catching up with AWS. It becomes more compelling each year with new services like Azure Stack and Azure Container Service. In 2016, AWS grew at the same pace as the market, but Microsoft Azure grew much faster. Microsoft launched Azure 5-6 years after AWS, so AWS has an enormous lead, but Azure is now at least competitive with AWS in every area, which lets the two compete on pricing. Enterprises no longer automatically choose AWS; they now research what the right cloud is for their organization. Many organizations are also embracing multi-cloud strategies, using the feature capabilities of multiple public clouds for different pieces of the enterprise. For instance, it saves money to run Microsoft services on Azure, so those workloads often get carved out in the cloud strategy.

Jun 8, 2017

Our guest on the podcast this week is Mike Bainbridge, digital technologist, speaker, and cloud expert. We discuss digital innovation and the disruptive business models that rely on it. Digital disruption comes up a lot today when thinking about the startups that challenge the way things have always been done; Airbnb and Uber are technology companies at their core, not hospitality and transportation companies. We look at AI and how it will affect the jobs of the future. Last, we discuss vendor lock-in and when it can be a good idea to go all in on one vendor.

Jun 1, 2017

We discuss the foundational traits of serverless computing, including scaling and provisioning, cost precision, high availability, and more. We also look at what Mike considers to be the main benefits of serverless, from costs based on specific usage to faster time to market. Enterprises can now build entirely new products in hours or days because so much of the inherent complexity has been commoditized by the vendors.

We also look at how large enterprises can get started down the serverless path. Once your organization is consuming some amount of public cloud, it’s easy to get your feet wet slowly. It’s low commitment and low cost to try the technology out in areas of your ecosystem that aren’t mission critical. Enterprises that want to leverage serverless technology should embrace visibility and monitoring, autonomy and automation.
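To ground the idea, here is a minimal, hypothetical AWS Lambda handler in Python: there are no servers to provision, it scales per invocation, and billing follows actual usage. The event shape (an API Gateway-style request) is an assumption for illustration.

```python
# Minimal AWS Lambda handler sketch: no servers to provision, billed only for the
# time it runs, and scaled automatically per invocation. The event shape assumes
# an API Gateway-style request and is illustrative only.
import json

def lambda_handler(event, context):
    """Echo back a greeting for an API Gateway-style request."""
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```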
