I was asked to write a few words about 2011 for Marketing Magazine and Figaro Magazine. They’re bound to be wrong but here they are. The trends below are not new trends, but current trends that will become a lot more prevalent next year. No laughing at the back.
2011 will see the increased socialisation of media. TV and other mainstream media will continue to drive search and social activity. Facebook will continue to wow and frustrate. Google will hook up their services with a new social layer. Everything kicks off and brands become more fragmented.
Customers will continue to become more involved in creation as brands talk to them about what they really want and expect. Brand relationships become a form of self-expression.
There’ll be the return of campaign hubs using social plug-ins on a larger canvas with creative code as we tire of Facebook restrictions. Display advertising makes a comeback as Google gets behind it again. Facebook enters the same market. Display ads become bookmarkable and brands offer rewards and discounts for online or offline purchase.
In 2011, location matters. Brands must use digital to get smart about coupons, redemption, price check and group buying. Mobile web will also take off as we become weary of developing for multiple platforms and suffer app fatigue. HTML5 is our friend.
The ‘internet of things’ – in which the ordinary objects we encounter every day are hooked up to the net – means channel thinking gets mixed up. Products can be photographed or scanned with your phone, letting people ‘like’ them. Real and digital collide.
Installations and events will also take off. Chris Vernon of Saatchi & Saatchi thinks, “As cuts consume publicly funded art projects, there is great scope for advertisers to become patrons of the arts during the age of austerity” but it’ll be creative thinking rather than the latest technology that cuts through.
Everything is a Remix is a four part series that investigates the history of remixing, sampling, stealing, transforming and how that mentality spills out of the context of music and into a wider mindset. You could even say that everything is a remix.
Unless you’ve been living under a rock you’ll have noticed that a bloke called Raoul Moat has been running around trying to hide from the Police – after shooting his ex-girlfriend and her new boyfriend. It’s a pretty dark tale of an ex-doorman who went off the rails massively.
However it’s been a pretty weird tale of how the “media” has covered it and handled it. I’m only touching on some of it here.
The Twitter community got on the case. @raoulontherun was set up with the bio “2010 Hide and Seek Champion”. Twitter suspended the account really quickly. Apparently it’s OK to mock the USA’s worst ever ecological disaster but not a murderer on the run.
Anyway it all kicked off on Friday night when Raoul was spotted in the Northumberland town of Rothbury. I picked it up at about 7pm as Sky News and BBC News dispatched reporters and went over to rolling news.
People were crying, Police armed response units were tearing around, the tension was palpable. However, reporters were running around like lunatics, sticking their microphones in the faces of people who were clearly incredibly upset. It was horrible.
I stuck with it for a bit – but in the end had to turn over. The “breaking reports” were meaningless. There was nothing to report because the situation was still unfolding and the Police didn’t want the media to get too close.
Repeatedly they were asked to stay inside their cars, or to move back, or to go indoors and stay away from the windows. They didn’t. The police even had to resort to this.
Anyway the madness didn’t stop there. England football legend and fellow troubled soul Paul Gascoigne turned up at the Police exclusion zone in Rothbury claiming to be a friend of “Moaty’s”.
He had brought Raoul Moat a can of lager, some chicken, a mobile phone, a dressing gown and a fishing rod. I shit you not.
He then got into a photographer’s car and did a radio interview on Real Radio Northeast.
Listen to the interview in full here. It’s pretty tragic.
Anyway, at around 1.15am Raoul Moat shot himself in the head. Possibly after being “tasered” by the Police.
Then this picture appeared on the BBC News homepage.
It’s like some sort of PhotoBomb! Massively inappropriate.
Looks like the ex-bodybuilder was the only one pumped up on steroids. Even the ITV News blurred this unfortunate gurn out on their main broadcast.
Can someone make sense of all this for me please? It’s all been very surreal.
I’m half expecting to see this in the newspapers on Monday morning: The Raoul Moat Memorial Edition “Cut Out and Keep” Party Mask.
Delia Derbyshire was born in Coventry, England, in 1937. She was educated at Coventry Grammar School and Girton College, Cambridge, where she was awarded a degree in mathematics and music.
In 1959, on approaching Decca Records, Delia was told that the company did not employ women in their recording studios, so she went to work for the UN in Geneva before returning to London to work for music publishers Boosey & Hawkes.
In 1960 Delia joined the BBC as a trainee studio manager. She excelled in this field, but when it became apparent that the fledgling Radiophonic Workshop was under the same operational umbrella, she asked for an attachment there – an unheard of request, but one which was, nonetheless, granted.
Delia remained ‘temporarily attached’ for years, regularly deputising for the Head, and influencing many of her trainee colleagues.
A recent Guardian article called her ‘the unsung heroine of British electronic music’, probably because of the way her infectious enthusiasm subtly cross-pollinated the minds of many creative people.
She had exploratory encounters with Paul McCartney, Karlheinz Stockhausen, George Martin, Pink Floyd, Brian Jones, Anthony Newley, Ringo Starr and Harry Nilsson.
Love this campaign site from SVT – Swedish TV – for their iPhone app.
Everything on a single page – campaign idea, open letter to Steve Jobs, video product demo, selection of screenshots, pre-filled Twitter message and feed, Twingly and Facebook feeds, live video streaming from Apple HQ (I’m assuming it’s spoofed), a click-“Ya”-to-announce-approval button if you’re Steve Jobs, a “Ya” ticker and YouTube webcam “Ya” uploads.
Sounds like a lot but it works really well for this.
The kicker is that it turns out that the approval campaign is a PR stunt – the app was only submitted the day before it all broke. Not many people will know that. But Apple will.
Wonder if SVT might find themselves having a few ‘approval difficulties’ for this one.
Update: Apple has just released the following statement:
“The SVT app was just submitted for App Store approval today. We look forward to reviewing it as part of the normal review process in hopes that it may soon join the more than 100,000 apps already on the App Store.”
Dom wrote this feature for the 1st anniversary of the rebranded Revolution magazine. This is a copy+paste of the expanded version posted on his blog.
It gives you a glimpse into some of the projects I’ve worked on at glue, and the technologies we’re looking into at the moment.
Little did I know it at the time, but a project for Mars’ sponsorship of Euro 2006 was the catalyst for a new approach to personalised video content here at glue.
What we did was crude and simple: we let people create a fan by choosing a head, body and hands. These individual assets existed as PNGs on the server and, depending on what was chosen, a JPEG was created using ImageMagick. Without thinking too much more about it, we moved on to the next project.
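In outline, that compositing step can be sketched as a small helper that assembles an ImageMagick `convert` call. The filenames and exact flags here are my assumptions for illustration, not the actual production code:

```python
# Sketch of the server-side compositing described above.
# Asset names and `convert` options are illustrative assumptions.

def build_composite_command(head_png, body_png, hands_png, out_jpeg):
    """Layer the chosen PNG assets over one another with ImageMagick's
    `convert`, writing the flattened result as a single JPEG."""
    return [
        "convert",
        body_png,                 # bottom layer
        head_png, "-composite",   # composite the head over the body
        hands_png, "-composite",  # then the hands over the result
        out_jpeg,
    ]

cmd = build_composite_command("head_03.png", "body_01.png",
                              "hands_02.png", "fan_1234.jpg")
# On the server this would be executed with something like:
# subprocess.run(cmd, check=True)
```

The point is simply that each user choice maps to a file on disk, and a one-line shell command turns the chosen layers into a flat image.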
A year later our Get The Message recruitment campaign for the Royal Navy was born:
We quickly realised that the audience we wanted to recruit weren’t exclusively those behind PCs all day. In fact the bulk of them weren’t. For this audience the only real channel available at scale was mobile.
The problem was we’d become experts in interactive video using Flash, but Flash wasn’t (and broadly still isn’t) compatible with many handsets. The file format of choice was / is MPEG video so we needed to replicate the browser experience using it.
We scratched our heads and fairly quickly came round to the idea that if we could create individual JPEGs on the fly, stitching them together would create video. So that’s exactly what we did – this time combining ImageMagick with FFmpeg.
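The stitching step looks something like the sketch below. The frame-naming pattern, frame rate and audio track are assumptions for illustration; the real pipeline will have used its own options:

```python
# Sketch of stitching generated JPEG frames into an MPEG with FFmpeg.
# Frame pattern, fps and audio file are illustrative assumptions.

def build_stitch_command(frame_pattern, audio_track, out_file, fps=15):
    """Build an FFmpeg call that turns a numbered JPEG sequence
    (e.g. frame_0001.jpg, frame_0002.jpg, ...) into a video file,
    muxed with an audio track."""
    return [
        "ffmpeg",
        "-framerate", str(fps),   # rate at which the stills are read
        "-i", frame_pattern,      # e.g. "frames/frame_%04d.jpg"
        "-i", audio_track,
        "-shortest",              # stop when the shorter input ends
        out_file,
    ]

cmd = build_stitch_command("frames/frame_%04d.jpg",
                           "voiceover.mp3", "message.mpg")
# subprocess.run(cmd, check=True)  # would invoke FFmpeg on the server
```

Same trick as before: per-user assets are generated as images, and a generic command-line tool does the heavy lifting of turning them into a handset-friendly file.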
The video message is delivered as an SMS. The recipient downloads and watches the video, and also has the ability to respond direct on handset:
At the time this was a first and we all felt pretty happy and gave ourselves a slap on the back like only the ad industry can. But almost naively, and for a second time, we’d stumbled on the door to a much bigger opportunity:
Replicating the Flash experience had fulfilled the requirements of this project, but we soon recognised that by automating motion graphics or 3D packages it’s immediately possible to generate video without creative limits.
Enter DYNAMIC VIDEO (a phrase we’ve bandied about the agency for a few years now that REALLY needs a better name…)
Whilst traditional video is shot with a camera and broadcast, dynamic video allows for content to be generated specific to the person watching it, at the moment of viewing.
To help understand this complex concept, think about the gaming world where a game is produced but each game-play is unique to the actions of the game player. With dynamic video the same is now true for brand experiences.
Here’s one such example we created in 2008 for Bacardi using their existing endorsement of UK beatboxing champion Beardyman.
The project was initiated by the simple thought, ‘wouldn’t it be great if everyone could beatbox as well as Beardyman.’ And from there a project was born.
It’s a simple ‘upload your face’ mechanic, using Kofi Annan here for the purposes of the demo:
There’s all sorts of complex things going on under the bonnet.
There’s proprietary image recognition software interpreting the uploaded photo, identifying facial elements and stripping the face out from its background (no need for manual intervention).
Then, using 3ds Max, the video is generated by mapping the face texture onto existing wireframe animations.
This technique has two immediate benefits:
1. Visually, pretty much anything is possible (at least anything that’s possible within motion graphics or 3D applications)
2. The generated file is the ubiquitous MPEG – enabling distribution across channels without the need to re-engineer
However the technique is fairly processor-intensive, taking around 20 seconds per video to generate. This gives a throughput of 4,320 videos per processor per day. Whilst this is OK on a smallish campaign, the only real option for larger ones is to throw more hardware at it, which can be costly and only becomes viable once a client really values what is being achieved creatively.
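The capacity figure falls straight out of the render time – one processor working flat out, with queueing and overheads ignored for the sake of the back-of-envelope sum:

```python
# Back-of-envelope capacity sum for the 20-second render time quoted above.
SECONDS_PER_DAY = 24 * 60 * 60   # 86,400

def daily_capacity(processors, render_seconds=20):
    """Videos per day at a fixed render time per video,
    assuming linear scaling across processors and no overhead."""
    return processors * SECONDS_PER_DAY // render_seconds

print(daily_capacity(1))   # 4320 – the 4,320 videos/day figure
print(daily_capacity(4))   # 17280 – more hardware scales linearly
```

Which is exactly why the economics only work once the campaign’s creative payoff justifies the hardware bill.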
The emergence of cloud computing farms, and the rendering capacity they offer, to an extent solves this issue, but it’s early days. These cloud farms not only offer scalable rendering capabilities but – with the proliferation of smaller devices in all our pockets – enable richer experiences to be created remotely and viewed on device.
Another sector dabbling in using cloud farms in this way is the handful of virtual rendering games companies that have recently emerged, which negate the need for a console by rendering content remotely and bringing it into the home via your broadband. (Can our broadband really cope with realtime 1080p video content? Or is this partly the reason these services haven’t yet taken off?) Definitely one to keep an eye on.
As is the recent emergence of open source, video-specific rendering farms like PandaStream.
Or potentially the answer is in not saving the generated video to file, but rather to dynamically construct the video within stream as done here:
It’s a neat solution, but the SDK means the production process is alien to existing skill sets in the short term.
So, generally speaking, it would be fair to say there’s lots of trial and error needed. And I can’t help but notice the aforementioned gaming industry is set on a collision course with the digital industry – both attacking a similar problem, but from different angles. This is a most exciting prospect. (Here’s the closest example of the two together I’ve seen to date.)
In the meantime it would be great to think that the Adobes of this world – or maybe, more likely, the hardware guys like nVidia or AMD – will move into this space and create a tool to ease the production process. Until they do, these experiences will be built by ingenuity: combining niche technologies to meet the needs of the project.
It therefore becomes apparent that, to stay ahead of competitors, R&D can’t be undervalued. The same goes for having the time and freedom to explore, trial and learn new technologies and techniques on paid-for work. As we’ve seen here, bits of work that at the time may not seem like much may in the future prove invaluable by re-emerging as a wholly different entity.
So collectively we (the industry) have come to a juncture where new creative opportunities exist. With this comes the need for internal re-education, both in how we approach briefs conceptually and in how we capture assets in a way that enables them to be manipulated with these techniques.
And with an eye on the future: glue recently ventured into the world of TV. I for one am really excited at the prospect of the day the archaic TV broadcasting infrastructure is modernised and we can apply our digital know-how to the currently stagnant format. It defies belief that everything is still run from BetaMax. Admittedly I don’t know the setup intimately, but I’d have thought all it needs is for systems to be driven by an internet-enabled computer – which happens on occasion, but not enough.
Here’s another, more dynamic example that the clever boys and girls at MiniVegas negotiated as a special short-term deal for S4C a few years ago: