The PaaS cloud computing lock-in and how to avoid it

Cloud computing has changed from being an easy choice into a difficult decision.

The reason is the proliferation of cloud offerings at every layer; today we find not only ‘everything-as-a-service’ cloud solutions, but also ‘everything-is-tailored-for-your-specific-situation-as-a-service’ offerings tagged as cloud solutions.

Is this good? I do not think so.

My main objection is that you will end up with a cloud solution that is no different from any solution you have previously designed and installed yourself, except at a cheaper rate and with a lower-quality SLA.

True cloud solutions should not focus only on cost reduction, increased agility and flexible capacity. You should also be buying something that supports portability between the private and public computing domains, and across different vendor platforms.

In early cloud solutions, mainly the ones focusing on Infrastructure-as-a-Service, this portability was heavily debated (remember the ‘Open Cloud Manifesto’?), and in the end we concluded that server virtualization solved a lot of the portability issues (I am simplifying, of course).

We also had Software-as-a-Service, and some publications showed that portability could be addressed by looking at standardized business process definitions and data normalisation (again, I am simplifying).
Now the Atos Scientific Community has published a whitepaper that looks at the most complex form of cloud computing: Platform-as-a-Service.

PaaS offerings today are diverse, but they share one characteristic: vendor lock-in. As in any market for an emerging technology, there is a truly diverse array of capabilities on offer from PaaS providers, from supported programming tools (languages, frameworks, runtime environments, and databases) to various types of underlying infrastructure – and even within the capabilities available for each PaaS.


The common characteristic that can be extracted from all this diversity is that PaaS users are currently bound to the specific platform they use, which makes it difficult to port the software (and data) created on top of these platforms.

As a result we see slow adoption of PaaS in the enterprise; only those groups that have a very well-defined end-user group are looking at PaaS – and mostly for the wrong reason: ‘just’ cost saving through standardization.

The Atos Scientific Community whitepaper identifies them as:

“Two primary user groups which benefit from using Cloud at the Platform as a Service level: Enterprises with their own internal software development activities and ISVs interested in selling SaaS services on top of a hosted PaaS.”


The current situation, where PaaS mostly results in vendor lock-in scenarios, is holding back the full potential of applications on a PaaS.

By introducing a general-purpose PaaS, we would enable a comprehensive, open, flexible, and interoperable solution that simplifies the process of developing, deploying, integrating, and managing applications in both public and private clouds.
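The whitepaper proposes an architecture, not code, but the portability idea behind it can be illustrated with a small sketch: if an application talks to platform services through a vendor-neutral interface, moving between providers becomes a one-line change rather than a rewrite. The `KeyValueStore` interface and the two vendor classes below are invented for illustration; they are not part of any real PaaS API.

```python
from abc import ABC, abstractmethod

class KeyValueStore(ABC):
    """Vendor-neutral interface the application codes against."""
    @abstractmethod
    def put(self, key: str, value: str) -> None: ...
    @abstractmethod
    def get(self, key: str) -> str: ...

class VendorAStore(KeyValueStore):
    """Stand-in for one provider's proprietary datastore."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data[key]

class VendorBStore(KeyValueStore):
    """Stand-in for a second provider: different internals, same contract."""
    def __init__(self):
        self._rows = []
    def put(self, key, value):
        self._rows = [(k, v) for k, v in self._rows if k != key]
        self._rows.append((key, value))
    def get(self, key):
        return dict(self._rows)[key]

def application(store: KeyValueStore) -> str:
    # The business logic never names a vendor, so it ports unchanged.
    store.put("greeting", "hello")
    return store.get("greeting")

# Switching platforms is now a configuration change, not a rewrite:
assert application(VendorAStore()) == application(VendorBStore()) == "hello"
```

Without such a neutral layer, every `put` and `get` in the application is a direct call into one vendor's API – which is exactly the lock-in the paper describes.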

Such an architecture is proposed and explained in detail in the whitepaper. It describes the desired capabilities and building blocks that need to be established, and it also offers an analysis of market trends and existing solutions in order to establish a future vision and direction for PaaS, as well as outlining the business potential of such a solution.

We can all continue to feel positive about the power and the business potential of cloud computing.

Changing your cost base from capex to opex, increasing the speed of your go-to-market strategies, and gaining flexibility in capacity and location are all very important for your business.

We should not, however, confuse vendor-specific solutions with cloud solutions only because they promise cost flexibility and easy deployment; being able to shift and shop around is always better – also in cloud computing.


This blog post is a repost of http://blog.atos.net/sc/2012/10/15/watch-this-space-the-paas-cloud-computing-lock-in-and-how-to-avoid-it/



Choose your friends wisely

Sharing your personal information with the founders of Facebook, MySpace, Pinterest, Friendster, Twitter and LinkedIn is probably something you would think about twice. The association of your private stuff with each of these networks is something you want to take very seriously.

There is an interesting tension between social networks and the concept of privacy. Not only because some people will share what others would want to keep secret, but also because social networks love to know more about you and continuously challenge your boundaries.

Let’s face it (pun intended) – the more you share, the more traffic you generate, the more money they make. It is that simple. So when social networks need to ‘take their responsibility’, they are acting against their nature (remember the story of the scorpion that wanted to cross the river?).

“If you are not paying for a product, you are the product being sold”

This tension between your privacy and their business model is described in detail in a recent whitepaper by the Atos Scientific Community (find it here) and they conclude:

“Social networking sites have been traditionally reluctant to take into consideration the data privacy concerns brought up by users and public authorities.”

The paper continues to look into the legal aspects of this subject and describes how we are dealing with the challenge of privacy in social networks. Several examples are cited and explained against the existing rules in Europe and the US.

In addition, the paper goes beyond the legal aspects and also explores the technical aspects of privacy in social networks. Most interesting is the observation that no single technology will support the need for privacy:

“Privacy needs, inside and outside social networks, are quite different and should be tackled using specifically tailored technologies.”

You can imagine that your personal finance and banking information on the one hand, and your holiday pictures on the other, are totally different datasets that need different approaches. The whitepaper shows this and explains how a difference can be made; it even explores the possibility of a ‘safe’ social network.

The paper gives a full analysis of several technologies that can support a safer social network and allow better control by the end-user. The authors also express a word of caution on the possibilities of cross-authorization – for example, using your Facebook account to log in on other sites.
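That caution about cross-authorization can be made concrete with a toy model. When you log in to a third-party site with a social network account, you grant the site a token whose scopes decide which of your data it may read; the danger is clicking through a broader request than the site actually needs. The scope names, profile fields and functions below are invented for illustration – real schemes such as OAuth are far more involved.

```python
# Toy model of delegated ('log in with...') authorization.
# All names here are invented for illustration, not a real API.
PROFILE = {
    "name": "Alice",
    "friends": ["Bob", "Carol"],
    "photos": ["holiday.jpg"],
}

def grant_token(scopes):
    """The social network issues a token limited to the requested scopes."""
    return {"scopes": set(scopes)}

def read(token, field):
    """A third-party site can only read fields covered by the token."""
    if field not in token["scopes"]:
        raise PermissionError(f"token not authorized for '{field}'")
    return PROFILE[field]

# A news site only needs your name...
narrow = grant_token(["name"])
print(read(narrow, "name"))      # works: only 'name' was granted

# ...but users often approve a much broader request without reading it.
broad = grant_token(["name", "friends", "photos"])
print(read(broad, "photos"))     # the site now sees your holiday pictures
```

The point of the model: the protection lives entirely in which scopes you approve at login time, which is exactly why the authors advise caution before cross-authorizing.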

Finally, the observation is that the social networking domain, in which vendors and end-users struggle to get a grip on privacy, is in fact not ignoring the issue. So there is hope – but that does not change the fact that you still need to think twice before you hit ‘Like’.


This blog post is a repost of http://blog.atos.net/sc/2012/10/08/watch-this-space-choose-your-friends-wisely/ 


6 reasons why Open Innovation is happening now

When is the last time you watched Sesame Street?

I was thinking about a wonderful song about collaboration and co-operation that has been around for a long time (see Sesame Street – "Cooperation Makes It Happen" – first performed in episode 2040, March 1985).

It seems that this way of creating new things is promoted very early in our childhood and is part of our collective memory of positive actions.


So, if Open Innovation is about collaboration and co-operation, you would think we should be doing it more often? Well, an upcoming whitepaper by the Atos Scientific Community explores this question and comes to some interesting conclusions.

"…innovation in the 21st Century is increasingly open, collaborative, multi-disciplinary, and global, resulting in greater opportunities, and challenges for traditional R&D approaches."


Apparently there are a couple of changes in today's society that enable Open Innovation to be a more natural way of doing things (together).

Firstly, there is increased pressure on cost and the need to go to market with new products and services much faster than before; having access to a bigger pool of knowledge, without having to invest a lot of time building that pool yourself, supports the adoption of Open Innovation.

"The key limitation of Closed Innovation is the lack of leverage of external knowledge and expertise in unknown emerging fields for internal innovation processes."


Secondly, we now see new business models emerging that support the joint development of new products and services. Previously this was quite difficult, but there are more and more companies that will take care of the groundwork – Amazon for flexible compute power and SalesForce for flexible marketing activities, to name a few well-known examples – and there are many other services that take care of the basics in flexible, pay-as-you-go, get-the-size-you-need delivery models.

Number three is better approaches to intellectual property and the protection of ownership. This takes away the need to keep things secret and allows for more sharing between companies. In addition, the understanding of, and broader agreement on, the different ways in which we can apply Open Source is also helping companies take a more relaxed attitude towards collaboration.

At number four we see the increased capabilities in social networking and easy communication between different locations and companies. Setting up meetings and long-distance virtual teams has become much easier.

"The walls of the enterprise are (therefore) no longer solid; ideas can filter into the innovation funnel via a ‘bi-directional, semi-permeable membrane’."


And at number five we see the increased understanding that each company holds a key to greatness hidden somewhere deep in the fabric of the organization, waiting to be discovered – while at the same time we know that we need a fresh perspective to bring it to the surface, and the additional intelligence of somebody outside to make it grow.

Or, probably most of all, at number six, the generation that loved, watched and learned from Sesame Street is now grown up; they know about this great song “co-operation … makes it happen” and are now putting it into action.


This blog post is a repost of http://atos.net/en-us/about_us/insights-and-innovation/thought-leadership/bin/wp_open_innovation.htm


The Power of Moving Pictures

When I came home from holiday, I connected my HD video camera to my computer and was able to publish my recordings to YouTube in just one click (publishing on YouTube was actually even easier than putting them on a DVD).

Last month, when I wanted to replace the hard disk in my MacBook, I found more than one detailed video on the internet with perfect instructions on how to do it.

The Khan Academy has a library of over 3,400 videos on everything from arithmetic to physics, finance, and history, plus hundreds of skills to practice, with academic lessons available to anybody connected to the internet.

I can set up a video conference with anybody in my company who is online, in less than a minute.

Is it any wonder that video is high on the list of technologies that are important for organizations to communicate, train, support and sell their products and services?

(Increased) revenues are hidden in better communications and in targeted communication on different channels: television, internet and in-company broadcasting.

The Atos Scientific Community has researched the importance of video and the major technical and business challenges that industries face, and describes the opportunities opening up for system integrators. The result will be published in an upcoming whitepaper. (here)

Video will become so omnipresent and embedded that it will be the normal medium of communication


According to the research, we will see video technology appear in every part of our communication, supported by increasingly capable mobile devices and better connectivity.

It is even expected that video will take over in importance from photography and written text.

Interactive capabilities and integration with social networks provide a large potential market that has not yet been completely monetized

This newly found importance of video is certainly not yet understood by everybody, and some time will still pass before the right revenue models are available – most models now are based on counting the number of views or the potential number of people reached.

Increasingly, we will see the value of data (including video) calculated against the impact of that data in social networks, the speed of distribution, and the amount of response generated.

Last but not least, I believe video will grow because it will simply increase the quality of our communications; people talking with each other on the phone or via instant messaging (I dare not mention email here…), especially across different continents, are missing a big part of what makes communication lead to understanding.

True understanding in any conversation only comes from actually seeing you smile.




This blog post is a repost of http://blog.atos.net/sc/2012/09/18/watch-this-space-the-power-of-moving-pictures/




Cloud Orchestration and your Interoperability Strategy

(Image: diagram showing the three main types of cloud computing – public/external, hybrid, private/internal. Photo credit: Wikipedia)

Life for a CIO or CTO used to be complex.

Then came Cloud Computing.

Now life is even more complex.

A year ago the Atos Scientific Community published a whitepaper on Cloud Computing. In the paper the concept was explained and we predicted that interoperability among clouds was going to be a major headache.

The paper also showed the result of a proof of concept in which we connected multiple private and public clouds to perform a single business workflow.

The hypothesis of the paper is that organizations will end up with multiple cloud environments:

“This will be driven by what is most fit for purpose for any given application (or part of it), based on an SLA trade-off between cost and business criticality. The corporate application landscape will therefore also fragment into those layers and into many business processes, requiring access to multiple applications and data connections that will need to span those layers. Unless enterprises consider these implications in advance, they risk building a heterogeneous IT infrastructure, only to discover that their key business processes can no longer be plugged together or supported.”

I think the authors (full disclosure: I was one of them) were right in their assumption, and the situation nowadays is no better than a year ago.

There are a couple of reasons I wanted to bring this to your attention again.

First, because the paper has been re-launched on www.Atos.net; second, because the paper has been accepted as a submission to the yearly internet conference WWW2012 (www.www2012.org); and third, because on February 10, 2012 the United Nations announced that they will take initiatives to “Aim for Cloud Interoperability”.

At least for me this was a surprise as I saw the UN mainly as an intergovernmental body looking to create lasting world peace.

But if you think this through it actually makes sense. The UN’s International Telecommunication Union (source: www.itu.int) “is committed to connecting all the world’s people – wherever they live and whatever their means. Through our work, we protect and support everyone’s fundamental right to communicate.” And there is a lot more on vision, collaboration, achieving standards etcetera, etcetera.

During the January meeting of the Telecommunication Standardization Advisory Group, an initiative was taken to start a study on this subject of cloud interoperability.

Apparently this was done on request of “leading CTO’s” to “investigate the standardization landscape in the cloud computing market and pursue standards to lead to further commoditization and interoperability of clouds”. (Source: ITU-T Newslog January 17, 2012).

This is good news and it is not the only initiative that came out recently.

The Organization for the Advancement of Structured Information Standards (OASIS), previously known as SGML OPEN, has started a technical committee on “Topology and Orchestration Specification for Cloud Applications” (TOSCA) aiming to make it easier to deploy cloud applications without vendor lock-in, “…while maintaining application requirements for security, governance and compliance” (source: www.oasis-open.org/news/pr/tosca-tc).

The newly formed committee is supported by vendors like CA, Cisco, EMC, IBM, Red Hat, SAP and others.
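TOSCA itself is a specification, but its core idea – describing an application as a topology of typed nodes and relationships, so that any compliant cloud can work out how to deploy it – can be sketched in a few lines. The node names and the plain-dictionary structure below are illustrative only, not actual TOSCA syntax.

```python
# Illustrative only: a TOSCA-like topology as nodes plus their dependencies
# ('hosted on' / 'connects to'), and a resolver that computes a valid
# deployment order from the relationships alone.
topology = {
    "vm":       [],                # no dependencies: deploy first
    "database": ["vm"],            # hosted on the vm
    "app":      ["vm", "database"],
    "lb":       ["app"],           # load balancer connects to the app
}

def deployment_order(nodes):
    """Topological sort: every node comes after all of its dependencies."""
    order, seen = [], set()
    def visit(node):
        if node in seen:
            return
        seen.add(node)
        for dep in nodes[node]:
            visit(dep)
        order.append(node)
    for node in nodes:
        visit(node)
    return order

print(deployment_order(topology))  # ['vm', 'database', 'app', 'lb']
```

The portability claim follows from the split: the topology description is vendor-neutral, and each platform supplies its own machinery for turning it into running infrastructure.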

In addition, I recently Googled “Cloud interoperability” and, when filtering for the last month, got about 60,000 hits, so I can say the subject is very much alive on the internet.

The point of all this? I firmly believe that in addition to your cloud strategy, you need to have a cloud interoperability strategy. You need to be aware of emerging standards and you need to talk to your vendor about it.

It is inevitable that some parts of your business will run “in the cloud”; nowadays it is important not only how to get there, but also how to (securely) stay there while maintaining the flexibility to move around, interconnect your processes and still take end-to-end responsibility.

Like I said at the beginning: life gets more complex.



[This blog post is a repost of http://blog.atos.net/sc/2012/03/02/cloud-orchestration-and-your-interoperability-strategy/]

The whitepaper can be downloaded here