Solving information overload in a digital age

By , 4 April 2014 at 16:50
Digital Life

Tags:
OPINION
Since the earliest times, humans have been interested in recording their life experiences for future reference and for storytelling. The task of recording those experiences, particularly through image and video capture, has never been as easy as it is today.

As high-resolution digital still and video cameras become increasingly pervasive, unprecedented amounts of multimedia are being transferred to personal hard drives and uploaded to online social networks every day. As a result, the number of photos and videos taken and shared by users has grown exponentially. This dramatic growth in digital personal media (user-generated content) has led to increasingly large media libraries on local hard drives and in online repositories such as Flickr, Picasa Web Albums or Facebook.

Unfortunately, a consequence of this multimedia-rich world is that digital information overload is a growing concern.

At Telefonica R&D, our research teams are working to alleviate this problem. We have developed a novel multimedia storytelling system that helps users both in their multimedia organization tasks and in the automatic selection of multimedia content for storytelling purposes, i.e. summarizing a collection of images or videos into a multimedia story that can be shared with other people, most likely through a social network.

Fully automatic summarization of personal photo collections for storytelling is a very hard problem, since each end user may have very different interests, tastes, photographic skills, and so on. In addition, meaningful and relevant photo stories require some knowledge of the social context surrounding the photos, such as who the user and the target audience are. We believe that automatic summarization algorithms should incorporate this information.

With the advent of photo and video capabilities in online social networking sites (OSNs), an increasing portion of users' social photo storytelling activities is migrating to these sites, where friends and family members update each other on their daily lives, recent events, trips or vacations. For instance, Facebook is the largest online repository of personal photos in the world, with more than 3 billion photos uploaded monthly. Hence, there are opportunities to mine existing photo albums in OSNs in order to automatically create relevant and meaningful photo stories for users to share online.

As opposed to some prior art in this area, neither user-generated tags nor comments describing the photographs (in local or online repositories) are taken into account, and no user interaction with the algorithms is expected. We follow an image-analysis approach in which both the user's context photos (images already available in the online social network to which the photo stories will be uploaded) and the collection photos (the collection of images that needs to be summarized into a story) are analyzed using image-processing algorithms. As a result, a large number of features and relevant metadata are extracted from each photo and used in the summarization process.
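As an illustration of the kind of low-level analysis involved, the sketch below computes a few global features (brightness, contrast, and a colorfulness estimate) from a raw RGB image array. The feature names and formulas are our own illustrative choices for this sketch, not the actual feature set extracted by the system described here.

```python
import numpy as np

def extract_features(pixels: np.ndarray) -> dict:
    """Compute simple global features from an RGB image array (H x W x 3).

    Illustrative only: a real storytelling system would extract a much
    richer set of visual features and metadata per photo.
    """
    rgb = pixels.astype(float) / 255.0
    brightness = rgb.mean()   # overall exposure proxy
    contrast = rgb.std()      # tonal-spread proxy
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Opponent-color colorfulness approximation
    rg, yb = r - g, 0.5 * (r + g) - b
    colorfulness = float(np.sqrt(rg.std() ** 2 + yb.std() ** 2))
    return {"brightness": float(brightness),
            "contrast": float(contrast),
            "colorfulness": colorfulness}

# Toy example: a flat mid-grey image has zero contrast and colorfulness.
grey = np.full((32, 32, 3), 128, dtype=np.uint8)
feats = extract_features(grey)
```

In practice these per-photo feature vectors, together with capture metadata, feed the downstream clustering and selection stages.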

Multimedia storytellers usually follow three steps when preparing their stories: they first choose the main characters in the story, next the main events to describe, and finally they select the media content (photos, videos) based on its relevance to the story and its aesthetic value. Note that one of the main contributions of this project is the design of computational models, both regression- and classification-based, that correlate well with the human perception of the aesthetic value of images and videos.
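A minimal sketch of such a regression-based aesthetics model is shown below: a linear least-squares fit from image features to mean human ratings. The training data, feature meanings, and 1-5 rating scale are entirely hypothetical; the actual models are trained on large sets of human-annotated photos.

```python
import numpy as np

# Hypothetical training data: feature vectors (e.g. contrast,
# colorfulness, sharpness) paired with mean human ratings on a
# 1-5 aesthetic scale. Values are made up for illustration.
X = np.array([[0.1, 0.2, 0.3],
              [0.8, 0.7, 0.9],
              [0.5, 0.4, 0.6],
              [0.2, 0.9, 0.1]])
y = np.array([1.5, 4.5, 3.0, 2.5])

# Fit a linear model by least squares (bias term appended).
A = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_aesthetics(features) -> float:
    """Predict an aesthetic score for a new feature vector."""
    return float(np.append(features, 1.0) @ w)
```

A classification-based variant would instead predict discrete labels (e.g. "appealing" vs "not appealing") from the same features.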

The computational aesthetics models have been integrated into the automatic selection algorithms for multimedia storytelling, which are another important contribution of this project. We analyze the images in the collection and cluster them based on capture time, face recognition, and similarity into Characters, Acts, Scenes and Shots (following dramaturgy and cinematography nomenclature). The photos already available in the user's social network are also analyzed in order to mirror his or her storytelling traits, mainly the ratio of people to non-people photos and who appears in those photos, helping define the story's Characters.
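The capture-time clustering can be illustrated with a simple gap-based grouping: a new "Scene" starts whenever the time elapsed since the previous photo exceeds a threshold. The two-hour threshold below is an arbitrary illustrative value, not one taken from the actual system.

```python
from datetime import datetime, timedelta

def cluster_by_time(timestamps, gap=timedelta(hours=2)):
    """Group photo capture timestamps into event clusters ("Scenes").

    A new cluster starts whenever the gap to the previous photo
    exceeds the threshold. Illustrative sketch only.
    """
    clusters = []
    for ts in sorted(timestamps):
        if clusters and ts - clusters[-1][-1] <= gap:
            clusters[-1].append(ts)   # same scene: small time gap
        else:
            clusters.append([ts])     # large gap: start a new scene
    return clusters

# Four photos: two in the morning, two in the evening -> two scenes.
shots = [datetime(2014, 4, 4, 10, 0),
         datetime(2014, 4, 4, 10, 15),
         datetime(2014, 4, 4, 18, 30),
         datetime(2014, 4, 4, 18, 40)]
scenes = cluster_by_time(shots)
```

Face recognition and visual similarity would then refine these time-based clusters into the Characters, Acts and Shots mentioned above.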


A human-centric approach has been used in all experiments to assess the quality of both the aesthetics and storytelling algorithms: humans have always been the final judges of our work, either by inspecting the aesthetic quality of the media or the final story generated by our storytelling platform. In an in-depth user study, we showed that our approach can help users create a first draft of a photo album to be shared online (performing as well as a summary generated by a professional) and that users of our system can produce multimedia stories with significantly less effort than when starting from scratch.

In sum, the main contributions of this project are twofold: (1) novel computational media aesthetics models for both images and videos that correlate with human perception, and (2) novel media selection algorithms that are optimized for multimedia storytelling on online social networks.

previous article

How an iTunes of data will give us control of our information

next article

Making cities smart through anonymised mobile data
