Your Future; Now available in real-time

Imagine you have an agenda that is updated automatically and in real time – it continuously adapts your schedule when meetings run long, predicts and updates your travel time to the next meeting as you go, and reshuffles your day because it ‘knows’ that a meeting with your best client typically takes 30 minutes longer than you originally planned for.

A proof of concept conducted by the Atos Scientific Community looked at this aspect of predictability and used traffic data from the city of Berlin to see whether real-time traffic forecasting (RTTF) was possible. The results are described in a recently published white paper.

  “RTTF enables a prediction (within 1 minute) of sensor data streams for the immediate future (up to four hours) and provides traffic condition classification for the upcoming time period based on the forecasted data.”

“The forecast provides a suitable time span for proactively managing upcoming incidents even before they appear.”

The team took a radically different approach to the challenges of today’s traffic management. Instead of proposing another reactive traffic management IT system with some smart analytics, the team successfully targeted a proactive approach that provides analytics to predict critical events before they appear. Using historic data and artificial neural network technology, predictions are created for the immediate future and used to determine the traffic status for the next four hours. Based on that information, actions can be taken proactively to mitigate or avoid upcoming incidents. The next step was to put the software to work and bring in data scientists with an understanding of the context; this helped in defining the right parameters and putting a pattern-based strategy (PBS) in place.
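
The white paper does not publish its code, but the mechanics are easy to sketch. Below is a minimal, illustrative Python example – the synthetic data, the use of scikit-learn and the network size are my own assumptions, not the team’s – showing how historic sensor readings can be cut into sliding windows and fed to a small neural network that forecasts the flow a few hours ahead.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Hypothetical example: one loop sensor reporting vehicle flow
    # (vehicles per hour) once per minute; the values are synthetic.
    np.random.seed(0)
    minutes = np.arange(7 * 24 * 60)                       # one week of history
    flow = 900 + 600 * np.sin(2 * np.pi * minutes / 1440)  # daily pattern
    flow += np.random.normal(0, 50, size=flow.size)        # measurement noise

    WINDOW = 60    # look back over the last hour of readings
    HORIZON = 240  # predict the flow 4 hours (240 minutes) ahead

    # Build (input window, future value) training pairs from the historic data.
    X, y = [], []
    for t in range(WINDOW, flow.size - HORIZON):
        X.append(flow[t - WINDOW:t])
        y.append(flow[t + HORIZON])
    X, y = np.array(X), np.array(y)

    # A small feed-forward neural network as the forecasting model.
    model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=300, random_state=0)
    model.fit(X, y)

    # "Real-time" use: feed the latest hour of readings and get the flow
    # expected four hours from now.
    latest_window = flow[-WINDOW:].reshape(1, -1)
    print("Forecast flow in 4 hours:", model.predict(latest_window)[0])

In the real system such a model would of course be refreshed continuously as new readings arrive, which is exactly the point made further down.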

“Being able to identify patterns out of the existing data, model them into patterns and come up with a system that can provide reliable predictions is a remarkable achievement in itself, but the true value of PBS is being able to apply such capabilities to strategy definition and decision making.”

Working with the subject matter experts, the team identified multiple models that were subsequently implemented in the software. The models are important because they keep you from being trapped into oversimplification: when a car is driving slowly, it can be because of a traffic jam, but it can also be an older person driving more carefully.

By introducing the concept of ‘flow’ – the number of vehicles passing a sensor each hour – the team could identify four different states, which were themselves parameterized by road capacity, speed limits and other factors. This information is then fed into a look-up-table-based complex event processing engine in order to predict, within one minute, the traffic situation at given locations.
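
To make the look-up idea concrete, here is a deliberately small sketch. The state names, thresholds and road parameters are invented for illustration; they are not the ones used in the proof of concept.

    # Classify a forecasted flow value into a traffic state, parameterized
    # per road segment. Thresholds and names are illustrative assumptions.
    ROAD_SEGMENTS = {
        # segment id: (capacity in vehicles/hour, speed limit in km/h)
        "A100-north": (4200, 80),
        "B96-center": (1800, 50),
    }

    def traffic_state(segment_id: str, forecast_flow: float) -> str:
        capacity, _speed_limit = ROAD_SEGMENTS[segment_id]
        utilization = forecast_flow / capacity
        # Simple four-state look-up keyed on how full the road is.
        if utilization < 0.4:
            return "free flow"
        if utilization < 0.7:
            return "dense"
        if utilization < 0.9:
            return "congested"
        return "jammed"

    print(traffic_state("A100-north", 3900))  # -> "jammed"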

Because in real life the historic data is continuously refreshed with the most recent events, the system is able to predict the situation on the road in real time.

The proof of concept clearly showed that a self-learning system, combined with a complex event processing engine and the help of data scientists with subject matter expertise, can accurately predict the near future – the white paper shows this in great detail.

  “Real Time Traffic Forecasting is an excellent example of how data sources and identified patterns can be exploited to gain insights and to develop proactive strategies to deal with upcoming events and incidents. It enables a short term view into the future which is long enough to act on predicted incidents rather than react on occurring ones”

For me this proof of concept shows the benefits of data analytics in everyday life, and I am looking forward to this future.


This blog post was previously published at http://blog.atos.net/blog/2013/12/12/watch-this-space-your-future-now-available-in-real-time/ 


Three (and more) disruptive changes in the media landscape

 

I am a news junkie; I eat, drink, snack, swallow and dine copiously on any news source. My starter is the newspaper in the morning, followed by a quick look at some of my favorites online. During the day, when work allows it, I will visit some other sites, and during lunch I might have a second look at the morning newspaper. The evening paper I read after dinner, and around 8 or 10 I will watch the evening news on television. Just before turning in, I will check my usual favorite websites again. About 3 or 4 times a week I will check out new background stories on YouTube, TED or some local news sites – they mostly serve the news in a video format, which is a good break from just reading about stuff.

Still, I am apparently an old-fashioned guy:

“Smart mobility is opening up the media market in two dimensions. It is enabling personalized engagement with audience segments previously un-reached, and it is creating the opportunity for a near unlimited range of multi-screen services that enable the users to interact via the second screen.”

In a white paper published by the Atos Scientific Community about disruptive changes in media, an overview is given of the impact of these changes, and the increased use of smart mobile devices is the first one mentioned. I myself still like the paper format of the news, but I am also increasingly drawn to using my phone or tablet.

 “Socially connected dynamic content creates the opportunity for mass media experiences that are unique to any social graph.”

Secondly, the authors indicate a strong increase in the interactions between producers and consumers of news. This need for direct interaction already existed with radio – many “shock jocks” have chosen that format to increase the impact of their radio shows in the past – but social interaction now allows for a much larger volume of interactions and sometimes, through those interactions, creates its own new news stories. We have seen this when weblogs publish videos of a bank robber or some hooligan beating up innocent people, and the readers actively participate in finding out the identity of these persons.

“Any individual has the opportunity to become their own broadcaster, and there are millions of examples of successful user generated channels (…). In this new world, the sole barriers to entry are an idea and basic production skills.”

Thirdly, the paper explains the impact of user-generated content. This used to be a very modest part of the media landscape, most often initiated by the professionals – for example, CNN or the BBC asking their viewers to upload pictures and movies – but it is now exploding into semi-professional channels on video services like YouTube and Vimeo. With the rise of consumer-friendly video equipment paired with HD quality, it is no longer expensive to be a creator, and I expect that when technologies like Google Glass become mainstream we will see (no pun intended) even bigger growth in user-generated content.

The paper shows at least four more disruptive changes in the short to medium term, which you will need to discover when it is finally published (hint: intellectual property, real-time advertising, personalization, network capacity).


This blog post was previously published at http://blog.atos.net/blog/2013/11/21/watch-this-space-three-and-more-disruptive-changes-in-the-media-landscape/  


 

Curiosity drives cloud computing

I like asking questions, and I like getting good answers even better. Because of that, I now have a love/hate relationship with search engines. Most of the time they give me a 50% answer – a kind of direction, a suggestion, a kind of coaching towards the real answer. It is like the joke about the consultant: “the right answer must be in there somewhere, because he or she gives me so many responses”.

In spite of all kinds of promises, search engines have not really increased their intelligence. Complex questions with multiple variables are still nearly impossible to get answered, and the suggestions to improve my question are mostly about my spelling, or offered because the search engine would have preferred to be asked about a different subject.

So nothing really good has come from search engines then? Well, arguably search engines have brought us cloud computing and very powerful access to lots and lots and lots of data, otherwise known as ‘the world wide web’.

No wonder I envision that powerful access and cloud computing are the two most important values we want to keep while increasing the capacity and intelligence to do real analytics on large data sets.

In a white paper of the Atos Scientific Community, these two elements are explored in great depth:

  • Data Analytics needs cloud computing to create an “Analytics as a Service” model, because that model best addresses how people and organizations want to use analytics.
  • This Data Analytics as a Service model (DAaaS) should not behave as an application; it should be available as a platform for application development.

The first statement, on the need for cloud computing, suggests we can expect analytics to become easily deployed, widely accessible and not dependent on deep investments by single organizations; ‘as a service’ implies relatively low cost and certainly a flexible usage model.

The second statement, about the platform capability of data analytics, however, has far-reaching consequences for the way we implement and build the analytic capabilities for large data collections.

“Architecturally, and due to the intrinsic complexities of analytical processes, the implementation of DAaaS represents an important set of challenges, as it is more similar to a flexible Platform as a Service (PaaS) solution than a more ‘fixed’ Software as a Service (SaaS) application.”

It is relatively easy to implement a single application that will give you an answer to a complex question; many of the applications for mobile devices are built on this model (take for example the many applications for public transport departure, arrival times and connections).

This “1-application-1-question” approach is, in my opinion, not a sustainable model for business environments; we need some kind of workbench and toolkit that is based on a stable and well-defined service.
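
To illustrate the difference, here is a deliberately tiny sketch of such a workbench in Python. The class and method names are hypothetical and not taken from the white paper; the point is only that applications compose generic datasets and analyses offered by the service instead of hard-coding one question.

    from typing import Callable, Dict, List

    class AnalyticsPlatform:
        """Toy stand-in for a Data Analytics as a Service workbench."""

        def __init__(self) -> None:
            self._datasets: Dict[str, List[dict]] = {}
            self._analyses: Dict[str, Callable[[List[dict]], object]] = {}

        def register_dataset(self, name: str, records: List[dict]) -> None:
            self._datasets[name] = records

        def register_analysis(self, name: str,
                              fn: Callable[[List[dict]], object]) -> None:
            self._analyses[name] = fn

        def run(self, analysis: str, dataset: str) -> object:
            return self._analyses[analysis](self._datasets[dataset])

    # One of many possible applications built *on top of* the platform:
    platform = AnalyticsPlatform()
    platform.register_dataset("departures", [
        {"line": "S1", "delay_min": 3}, {"line": "S1", "delay_min": 7},
    ])
    platform.register_analysis("average_delay",
        lambda rows: sum(r["delay_min"] for r in rows) / len(rows))

    print(platform.run("average_delay", "departures"))  # -> 5.0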

The white paper describes a proof of concept that explored such an environment for re-usability, cloud aspects and flexibility. It also points to the technologies used and how they can work together to create ‘Data Analytics as a Service’.


This blog post was previously published at http://blog.atos.net/blog/2013/03/25/watch-this-space-curiosity-drives-cloud-computing/



A new business model in 3 easy steps

If you like curly fries, you are probably intelligent (1).

This insight comes from the University of Cambridge. The researchers analysed the data from Facebook to show that ‘surprisingly accurate estimates of Facebook users’ race, age, IQ, sexuality, personality, substance use and political views can be inferred from the analysis of only their Facebook Likes’.
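
To give a feel for the kind of model behind such findings, here is a toy Python sketch: a logistic regression that predicts a binary trait from which pages a user has Liked. The pages, likes and labels below are invented; the Cambridge study worked with millions of real profiles and far richer methods.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    PAGES = ["curly fries", "thunderstorms", "daily talk show", "sports team"]

    # Rows = users, columns = 1 if the user Liked that page (invented data).
    likes = np.array([
        [1, 1, 0, 0],
        [1, 0, 0, 1],
        [0, 0, 1, 1],
        [0, 1, 1, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 1],
    ])
    high_iq = np.array([1, 1, 0, 0, 1, 0])  # invented labels

    model = LogisticRegression().fit(likes, high_iq)

    new_user = np.array([[1, 0, 1, 0]])     # Likes curly fries and the talk show
    print("P(high IQ):", model.predict_proba(new_user)[0, 1])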

The possibility of collecting large amounts of data from everyday activities – by people, factory processes, trains, cars, the weather and just about anything else that can be measured, monitored or otherwise observed – is a topic that has been discussed in our blogs many times.

Sometimes labelled ‘the Internet of Things’ or, from a different angle, ‘Big Data’ or ‘Total Data’, the collection and analysis of data has been a topic for technology observations, a source of concern and an initiator of new technology opportunities.

This blog is not about the concerns, nor is it about the new technologies. Instead, it is about a view introduced in a new white paper by the Atos Scientific Community called “The Economy of Internet Applications” – a paper that gives us a different, more economic view of these new opportunities.

Let’s take a look at a car manufacturer. The car he (or she) builds will contain many sensors, and the data from those sensors supports the manufacturer in several ways: it enables better repairs for that one car, it can be combined with data from many cars in an analysis to build a better car in the future, and it can show information to the user of the car (speed, mileage, fuel). The driver generates the data (if a car is not driven, there is no data), and both the driver and the car manufacturer profit from the result.

Now pay attention, because something important is happening: when the car manufacturer provides the combined data of the driver and the car to an insurance company, a new business model is created.

The driver still puts in the data by using the car, the manufacturer’s sensors in the car still collect the data, but the insurance company gets the possibility to do a better risk analysis of the driver’s behaviour and the car’s safety record.
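
As a purely hypothetical sketch of the insurer’s side of this exchange, the combined data could feed something as simple as the scoring function below; all weights, caps and figures are invented for illustration.

    # Combine driver behaviour data (from the car's sensors) with the
    # model's safety rating into a risk score and a premium adjustment.
    def risk_score(harsh_brakes_per_100km: float,
                   avg_speeding_pct: float,
                   model_safety_rating: int) -> float:
        """Higher score = higher risk. Weights are illustrative only."""
        behaviour = 0.6 * harsh_brakes_per_100km + 0.4 * avg_speeding_pct
        vehicle = (5 - model_safety_rating) * 2   # a 5-star car adds nothing
        return behaviour + vehicle

    def premium(base_premium: float, score: float) -> float:
        # Cap the data-driven adjustment at +/- 20% of the base premium.
        adjustment = max(-0.2, min(0.2, (score - 5) / 25))
        return round(base_premium * (1 + adjustment), 2)

    telemetry = {"harsh_brakes_per_100km": 1.5, "avg_speeding_pct": 4.0}
    print(premium(600.0, risk_score(**telemetry, model_safety_rating=5)))  # -> 540.0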

This would allow the insurance company to give the driver a better deal on his insurance, or to sponsor some safety equipment in the car so there is less risk of big insurance claims for health or property damage.

It would allow the car manufacturer to create more value from data it has already collected, and it would give the driver additional benefits in lower insurance payments or improved safety.

What just happened is that we created a multi-sided market and it is happening everywhere.

“If you don’t pay for the product, you are the product”

The white paper explains it in more detail but the bottom line is that due to new capabilities in technology, additional data can easily be collected.

This data can be of value for different companies participating in such a data collection and the associated analytics platform.

Based on the economic theory of multi-sided markets, the different participants can influence each other in a positive way, especially across sectors (the so-called network effect).

So there you have it, the simple recipe for a new business model:

  1. Find a place where data is generated. This could be in any business or consumer oriented environment. Understand who is generating the data and why.
  2. Research how (a) that data, or the information in it, can give your business a benefit, and (b) how data that you own or generate yourself can enrich the data from the other parties.
  3. Negotiate the usage of the data by yourself or the provisioning of your data to the other parties.

In the end this is about creating multiple win scenarios that are based on bringing multiple data sources together. The manufacturer wins because it improves its product, the service provider wins because it can improve its service, and the consumer wins because he receives both a better product and a more tailored service.

Some have said that Big Data resembles the gold rush (2) of many years ago. Everybody is doing it and it seems very simple: just dig in and find the gold – it was even called ‘data mining’.

In reality, with data nowadays, it is even better: if you create or participate in the right multi-sided market, that data, and thus the value, will be created for you.

(1) http://www.cam.ac.uk/research/news/digital-records-could-expose-intimate-details-and-personality-traits-of-millions

(2) http://www.forbes.com/sites/bradpeters/2012/06/21/the-big-data-gold-rush/


This blog post was previously published at http://blog.atos.net/blog/2013/03/18/watch-this-space-a-new-business-model-in-3-easy-steps/


How big is your robot?

What do you get when you combine cloud computing, social networking, big data and modern-day engineering? You get a kick-ass robot. This was my first thought when I finished reading a white paper published by the Atos Scientific Community on the topic of robots.

Central to the paper is the question "Where is the mind of the future robot?", and once it outlines the concept of a robot that can utilize everything available in cyberspace, you may find that question difficult to answer.

Today it is hard to predict where on earth all of the data about you is stored in the cloud, and we have never been able to communicate more easily. It is easy to see that robots will be everywhere, able to utilize all available information. This will lead to a new class of robot personas and capabilities.

Once the robot is part of a social network, it could virtually interact with humans as well and thus start truly mimicking human behavior.


When I was (much) younger we had a program on our home computer called ‘Eliza’. This program would behave as an electronic psychiatrist. It had some limited learning capabilities and some clever language skills to ‘trick’ you into having an actual conversation.

If you typed something like "I hate talking to a computer", Eliza would answer with "Hate seems to be important to you, can you explain that?"
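
For those who never met it: the trick is surprisingly simple to sketch. A handful of keyword patterns and reflection templates go a long way; the rules below are my own minimal reconstruction, not the original program.

    import re

    # Each rule: a pattern to look for and a template that reflects the
    # captured words back as a question.
    RULES = [
        (re.compile(r"\bI hate (.+)", re.IGNORECASE),
         "Hate seems to be important to you. Can you explain why you hate {0}?"),
        (re.compile(r"\bI am (.+)", re.IGNORECASE),
         "How long have you been {0}?"),
        (re.compile(r"\bcomputer\b", re.IGNORECASE),
         "Do computers worry you?"),
    ]

    def reply(user_input: str) -> str:
        for pattern, template in RULES:
            match = pattern.search(user_input)
            if match:
                return template.format(*match.groups())
        return "Please tell me more."

    print(reply("I hate talking to a computer"))
    # -> Hate seems to be important to you. Can you explain why you hate
    #    talking to a computer?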

If we now multiply the capabilities of this ‘Eliza’ by a thousand or more (using cloud computing scalability), bring in the analytics of all of your ‘likes’ or ‘diggs’ or even the behaviour of your friends, combine that with knowledge about your locations, and multiply it all by analysing the things you did 5 years ago, 10 years ago and today… well, I think you get the picture.

The more a future robot knows or has access to, the better it will be able to fulfil its role in supporting us. This may not sit well with everybody, but if we utilize this capability in a clever way, I believe we can benefit.

Especially if we also take into account that a robot can take different forms, could exist virtually, or maybe even be in multiple locations at the same time, with access to the right information and the computing power to use it to our benefit. The whitepaper describes some of these scenarios and puts them in the perspective of the role of IT providers and systems integrators.

Based on my reading of the whitepaper I was thinking that maybe the statement ‘I cannot be in two places at the same time’ will soon become a thing of the past.



[This blog post is a repost of http://blog.atos.net/blog/2012/11/26/watch-this-space-how-big-is-your-robot/ ]