Curiosity drives cloud computing

I like asking questions, and I like getting good answers even better. Because of that, I now have a love/hate relationship with search engines. Most of the time they give me a 50% answer: a kind of direction, a suggestion, a kind of coaching towards the real answer. It is like the joke about the consultant: “the right answer must be in there somewhere, because he or she gives me so many responses”.

In spite of all kinds of promises, search engines have not really increased their intelligence. Complex questions with multiple variables are still nearly impossible to get answered, and the suggestions to improve my question are mostly about my spelling, or about a different subject the search engine would have preferred to be asked about.

So nothing really good is coming from search engines then? Well, arguably search engines have brought us cloud computing and very powerful access to lots and lots and lots of data, otherwise known as ‘the world wide web’.

No wonder, then, that I see powerful access and cloud computing as the two most important values to preserve while we increase the capacity and intelligence to do real analytics on large data sets.

In a whitepaper of the Atos Scientific Community, these two elements are explored in great depth:

  • Data Analytics needs cloud computing to create an “Analytics as a Service” model, because that model best addresses how people and organizations want to use analytics.
  • This Data Analytics as a Service model (DAaaS) should not behave as an application; it should be available as a platform for application development.

The first statement, on the need for cloud computing, suggests we can expect analytics to become easily deployed, widely accessible and not dependent on deep investments by single organizations; ‘as a service’ implies relatively low cost and certainly a flexible usage model.

The second statement, about the platform capability of data analytics, has far-reaching consequences for the way we implement and build the analytic capabilities for large data collections.

Architecturally, and due to the intrinsic complexities of analytical processes, the implementation of DAaaS represents an important set of challenges, as it is more similar to a flexible Platform as a Service (PaaS) solution than to a more “fixed” Software as a Service (SaaS) application.

It is relatively easy to implement a single application that will give you an answer to a complex question; many of the applications for mobile devices are built on this model (take for example the many applications for public transport departure, arrival times and connections).

This “1-application-1-question” approach is, in my opinion, not a sustainable model for business environments; we need some kind of workbench and toolkit that is based on a stable and well-defined service.
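To make the contrast concrete, here is a minimal sketch of that workbench idea: a single, stable analytics service that very different questions can be phrased against, instead of one application per question. The service, field names and datasets are hypothetical, purely for illustration.

```python
import json

# Hypothetical sketch of a generic "Data Analytics as a Service" request builder:
# one stable service that many different questions can be phrased against,
# instead of one application per question. Field names and datasets are invented.
def build_analytics_request(dataset: str, metric: str, group_by: str) -> str:
    """Compose a request payload for an imaginary DAaaS endpoint."""
    payload = {
        "dataset": dataset,    # e.g. "public-transport-departures"
        "metric": metric,      # e.g. "average_delay_minutes"
        "group_by": group_by,  # e.g. "line"
    }
    return json.dumps(payload)

# The same service answers very different questions by changing parameters only:
print(build_analytics_request("public-transport-departures", "average_delay_minutes", "line"))
print(build_analytics_request("web-shop-orders", "total_revenue", "region"))
```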

The white paper describes a proof of concept that has explored such an environment for re-usability, cloud aspects and flexibility. It also describes the technology used and how the components can work together to create ‘Data Analytics as a Service’.


This blog post was previously published at http://blog.atos.net/blog/2013/03/25/watch-this-space-curiosity-drives-cloud-computing/



Would you like a cup of IT?

The change in the IT landscape brought about through the introduction of Cloud Computing is now driving a next generation of IT enablement. You might call it Cloud 2.0, but the term 'Liquid IT' much better covers what is being developed.

In a recently published white paper by the Atos Scientific Community, Liquid IT is positioned not only as a technology or architecture; it is also very much about the impact of this change on the business you are doing day to day with your customer(s).

"A journey towards Liquid IT is actually rather subtle, and it is much more than a technology journey"

The paper explains in detail how the introduction of more flexible IT provisioning, now done in real time, allows for financial transparency and agility. A zero-latency provisioning and decommissioning model, complete with genuine utility pricing based on actual resources consumed, enables us to drive the optimal blend of minimizing cost and maximizing agility. Right-sizing capabilities and capacity to the needs of the users at all times will impact your customer relationship – but, importantly, designing such a system starts with understanding the business needs.
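As a minimal illustration of what genuine utility pricing based on actual resources consumed could look like, here is a small sketch; the rates and usage figures are invented for the example and do not come from the paper.

```python
# Illustrative utility-pricing sketch: charge only for resources actually consumed.
# The rates and the usage record below are made-up example values.
HOURLY_RATES = {
    "vcpu": 0.04,         # price per vCPU-hour
    "gb_ram": 0.01,       # price per GB-of-RAM-hour
    "gb_storage": 0.0002  # price per GB-stored-hour
}

def monthly_charge(usage_hours: dict) -> float:
    """usage_hours maps a resource type to the resource-hours consumed this month."""
    return sum(HOURLY_RATES[resource] * hours for resource, hours in usage_hours.items())

# A workload that used 2 vCPUs and 4 GB RAM for 300 hours, plus 50 GB stored for 720 hours:
usage = {"vcpu": 2 * 300, "gb_ram": 4 * 300, "gb_storage": 50 * 720}
print(f"Invoice this month: {monthly_charge(usage):.2f} EUR")
```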

"Liquid IT starts from the business needs: speed, savings, flexibility, and ease of use"

Existing examples of extreme flexibility in IT (think Gmail, Hotmail or other consumer-oriented cloud offerings) have had to balance standardization against scale. The more standard the offering, the more scale can be achieved. This has always been a difficult scenario for more business-oriented applications. The paper postulates that with proper care for business needs and the right architecture, similar flexibility is achievable for business processes.

Such a journey to 'Liquid IT' indeed includes tough choices in technology and organization, but it also forces the providers of such an environment to take an in-depth look at the financial drivers in the IT provisioning and IT consumption landscape.

"The objectives of financial transparency dictate that all IT services are associated with agreed processes for allocation, charging and invoicing"

There are two other aspects that need to change in parallel with this move to more agility in IT: the role of the CIO will evolve, and the SLAs that he or she is either buying or selling will change accordingly.

Change management will transform into information management, as the use of IT as a business enabler is no longer the concern of the CIO. IT benchmarking will become an increasingly important tool for business owners to measure the level of agility achieved. The contribution to business performance will be measured and needs to be managed in line with business forecasts.

The white paper authors conclude that "Business agility is the main result of Liquid IT" – sounds like a plan!

This blog post was previously published at http://blog.atos.net/blog/2013/03/08/watch-this-space-would-you-like-a-cup-of-it/


 

Three reasons to change the Internet now

Times are changing and we all need to adapt. The internet has had a major impact on all of our lives and continues to be a growing force in all aspects of society: in personal interactions, in knowledge management and in the way we do business.

In a whitepaper by the Atos Scientific Community, this evolution of ‘the net’ is described and put in the context of the additional functionality we now expect from our interactions on the internet. The authors challenge the current technology stack behind the many, many connections and network capabilities that have to be served to make the internet do what it is supposed to do.

The topology of the Internet has evolved through economic and technological optimization decisions to a flatter structure where major content providers and distributors get as close as possible to the access networks used by their customers.

There seem to be good reasons to take a close look at this technology evolution and make some choices if we want to continue to enjoy the internet:

  1. Because of the cloud computing trend, more and more traffic is concentrated between a few internet powerhouses: Facebook, Amazon, Google and Microsoft. The distributed nature of the original internet simply does not exist anymore.
  2. Because of the huge increase in mobile internet usage, the way that information is accessed, changed and presented is different from the past models – the existing networking functionality is not optimized for this type of usage.
  3. Future scenarios predict that, by assigning an IP address to just about any device you can think of, we will create a huge peer-to-peer network in which human interaction will be only a small portion of all connections; “the internet of things”. The current internet technology is not designed for this.

These changes raise some fundamental questions, and these are described in more detail in the paper. Most noticeably, the authors draw our attention to the fundamental nature of the internet as it is built at the moment: a decentralized web of processing and access points.

“On the long run, the question is raised whether the Internet will durably follow a concentration trend driving it towards a more centralized network or if we will see a new wave of decentralization.”

The whitepaper dives into the technology of the internet and shows where we are facing potential bottlenecks.


[This blog post is a repost of http://blog.atos.net/blog/2012/12/03/watch-this-space-three-reasons-to-change-the-internet-now/ ]


 

How big is your robot?

What do you get when you combine cloud computing, social networking, big data and modern-day engineering? You get a kick-ass robot. This was my first thought when I finished reading a published whitepaper by the Atos Scientific Community on the topic of robots.

Central to the paper is the question “Where is the mind of the future robot?”, and once you consider the concept of a robot that can utilize everything available in cyberspace, you may find that question difficult to answer.

Today it is hard to predict where on earth all of the data about you is stored in the cloud, and we have never been able to communicate more easily. It is easy to see that robots will be everywhere, able to utilize all available information. This will lead to a new class of robot personas and capabilities.

Once the robot is part of a social network, it could virtually interact with humans as well and thus start truly mimicking human behavior.


When I was (much) younger we had a program on our home computer called ‘Eliza’. This program would behave like an electronic psychiatrist. It had some limited learning capabilities and some clever language skills to ‘trick’ you into having an actual conversation.

If you typed something like “I hate talking to a computer”, Eliza would answer with “Hate seems to be important to you, can you explain that?”
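For readers who never met it: the trick behind Eliza was simple keyword matching and reflection. Here is a minimal sketch of that idea; it is my own illustrative reconstruction, not the original program.

```python
import re

# Illustrative reconstruction of Eliza's keyword-and-reflection trick;
# the two rules below are examples, not the original program's script.
RULES = [
    (re.compile(r"\bI hate (.+)", re.IGNORECASE),
     "Hate seems to be important to you. Can you explain why you hate {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE),
     "How long have you been {0}?"),
]

def reply(sentence: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please tell me more."  # fallback when no rule matches

print(reply("I hate talking to a computer"))
```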

If we now multiply the capabilities of this ‘Eliza’ by a thousand or more (using cloud computing scalability), bring in the analytics of all of your ‘likes’ or ‘diggs’ or even the behaviour of your friends, combine that with knowledge about your locations, and multiply that by analysing all the things you did 5 years ago, 10 years ago and today… well, I think you get the picture.

The more a future robot knows or has access to, the better it will be able to fulfil its role in supporting us. This may not sit well with everybody, but if we utilize this capability in a clever way, I believe we can benefit.

Especially if we also take into account that a robot can take different forms, could exist virtually, or maybe even be in multiple locations at the same time, with access to the right information and the computing power to use it to our benefit. The whitepaper describes some of these scenarios and puts them in the perspective of the role of IT providers and systems integrators.

Based on my reading of the whitepaper I was thinking that maybe the statement ‘I cannot be in two places at the same time’ will soon become a thing of the past.



[This blog post is a repost of http://blog.atos.net/blog/2012/11/26/watch-this-space-how-big-is-your-robot/ ]


 

The PaaS cloud computing lock-in and how to avoid it

Cloud computing has changed from being an easy choice into a difficult decision.

The reason is the proliferation of cloud offerings at all layers; today we do not only find ‘everything-as-a-service’ cloud solutions, but also ‘everything-is-tailored-for-your-specific-situation-as-a-service’ offerings tagged as cloud solutions.

Is this good? I do not think so.

My main objection is that you will end up with a cloud solution that is no different from any solution you have previously designed and installed yourself, only at a cheaper rate and with a lower-quality SLA.

True cloud solutions should not only focus on cost reduction, increased agility and flexible capabilities. You should also be buying something that supports portability between the private and public computing domains, and across different vendor platforms.

In early cloud solutions, mainly the ones focussing on Infrastructure as a Service, this portability was heavily debated (remember the ‘Open Cloud Manifesto’?), and in the end we concluded that server virtualization solved a lot of the portability issues (I am simplifying, of course).

We also had Software as a Service, and some publications showed that portability could be addressed by looking at standardized business process definitions and data normalisation (again, I am simplifying).

Now the Atos Scientific Community has published a whitepaper that looks at the most complex form of cloud computing: Platform as a Service.

PaaS offerings today are diverse, but they share a vendor lock-in characteristic. As in any market for an emerging technology, there is a truly diverse array of capabilities being offered by PaaS providers, from supported programming tools (languages, frameworks, runtime environments, and databases) to various types of underlying infrastructure, even within the capabilities available for each PaaS.


So a common characteristic that can be extracted from all this diversity is that PaaS users are currently bound to the specific platform they use, which makes porting the software (and data) created on top of these platforms difficult.
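One common way to soften that lock-in (a general sketch of the idea, not a design taken from the whitepaper; the interface and adapter below are hypothetical) is to keep application code behind a thin, platform-neutral interface, so that only a small adapter changes when the underlying PaaS changes:

```python
from abc import ABC, abstractmethod

# Hypothetical portability sketch: application code depends on a neutral interface,
# while platform-specific details live in small adapters that can be swapped.
class BlobStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryBlobStore(BlobStore):
    """Adapter for local development and tests; a real one would wrap a vendor SDK."""
    def __init__(self) -> None:
        self._data = {}

    def put(self, key: str, data: bytes) -> None:
        self._data[key] = data

    def get(self, key: str) -> bytes:
        return self._data[key]

def save_report(store: BlobStore, name: str, content: bytes) -> None:
    # Application logic only sees the interface, never the vendor-specific API.
    store.put(f"reports/{name}", content)

store = InMemoryBlobStore()
save_report(store, "q1.txt", b"quarterly figures")
print(store.get("reports/q1.txt"))
```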

As a result we see slow adoption of PaaS in the enterprise; only those groups that have a very well-defined end-user group are looking at PaaS – and mostly for the wrong reason: ‘just’ cost saving through standardization.

In the Atos Scientific Community whitepaper they are identified as:

“Two primary user groups which benefit from using Cloud at the Platform as a Service level: Enterprises with their own internal software development activities and ISVs interested in selling SaaS services on top of a hosted PaaS.”


The current situation, where PaaS mostly results in vendor lock-in scenarios, is holding back the full potential of applications on a PaaS.

Introducing a general-purpose PaaS would give us a comprehensive, open, flexible, and interoperable solution that simplifies the process of developing, deploying, integrating, and managing applications in both public and private clouds.

Such an architecture is proposed and explained in detail in the whitepaper: it describes the desired capabilities and building blocks that need to be established, offers an analysis of market trends and existing solutions in order to establish a future vision and direction for PaaS, and outlines the business potential of such a solution.

We can all continue to feel positive about the power and the business potential of cloud computing.

Changing your cost base from capex to opex, increasing the speed of your go-to-market strategies, and gaining flexibility in capacity and location are very important for your business.

We should not, however, confuse vendor-specific solutions with cloud solutions only because they promise flexible cost and easy deployment; being able to shift and shop around is always better – also in cloud computing.


This blog post is a repost of http://blog.atos.net/sc/2012/10/15/watch-this-space-the-paas-cloud-computing-lock-in-and-how-to-avoid-it/