Tuesday, November 5, 2013

Why I'm heading back to the domain

I'm taking it back to the domain, christopherberry.ca

There are a few reasons.

Maybe you can relate.

I started using Blogger in 2008. I already had a blog at christopherberry.ca - which contained a lot of material that was tangential to marketing analytics.

I wanted a single place to focus on analytics. I started this Eyes on Analytics blog. I broke out that content on purpose. That's what we did back then.

Pick a topic for your blog, and stick to it.

Back to the Past

Having your own domain sounds like an ancient idea.

I was catching up on video from the RealtimeConf and caught Amber Case giving a great talk. She talked about farming your content on your own domain, on your own software, and then syndicating that content outwards.

That just makes a crazy amount of sense to me.

If it's on my domain, it's my property. And it's my property to manage.

Centralize the original content (the OC) and syndicate it out.

It could be asked, 'Why have a blog that only a half dozen people see?' Well, it's a great way to share with a half dozen people.

In Conclusion

So, for those reasons, I'm heading on back to the domain.

I'll test it. And let you know how it goes.

***

I'm Christopher Berry
More Blogging at christopherberry.ca

Monday, September 30, 2013

Analytics on the second screen, now that cards have won

It's official. Cards have won!

It poses challenges for dashboards.

Prior to 2013, depending on who you asked, the primary constraint was the standard piece of letter-sized paper. For anything to be considered 'executive', all the information had to sit on one side of that standard letter-sized paper. And my, how we crammed it all in.

Eight-point print reigned - readability be damned.

There were some among us who dared to push out further. The introduction of the legal-sized paper dashboard certainly drew heckles from the back. A much younger version of myself once dared to print dashboards on tabloid-sized paper. That was a bridge too far, I'm afraid. Outrageous, even.

But now it's 2013, and it looks like Bring Your Own Device (BYOD) is here to stay.

And Cards have won.

So let's adapt and thrive.

For a single metric, on a single card: communicate the current state, the rate of change, and the trend.

Consider the three images below:




What happens when we want to compare two metrics? In other words, what if we're curious about the association between two aggregate metrics?

We don't always have the width to display relationships in the typical columnar way we do with a huge screen. (And pinching and zooming isn't an acceptable solution!)

The real constraint is legibility.


If nobody had to actually read, or use, the interface, it wouldn't be so much of a problem. We could check off the box and be done with it.

But assume that somebody will use the device to rapidly traverse a data set, and that they need to actually see and read information from that screen. What then?

Because the Card design pattern has been deemed the winner in responsive design, the question of analytical information architecture (née dashboarding) becomes an awful lot more interesting.

Feel the constraint for yourself. Try it out.
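
One rough way to try it, assuming matplotlib: render a single metric card at roughly phone-card size and see how much legible information actually fits. The metric, numbers, and sizes below are made up for illustration.

# A minimal sketch: one metric card showing current state, rate of change, and trend.
import matplotlib.pyplot as plt

current_state = 48200                      # made-up current value
rate_of_change = 0.062                     # made-up week-over-week change
trend = [41, 44, 43, 47, 46, 48]           # made-up recent history for a sparkline

fig, ax = plt.subplots(figsize=(3.0, 1.8)) # roughly card-sized canvas
ax.axis("off")
ax.text(0.05, 0.80, "Visits (weekly)", transform=ax.transAxes, fontsize=9, color="gray")
ax.text(0.05, 0.40, f"{current_state:,}", transform=ax.transAxes, fontsize=20, weight="bold")
ax.text(0.05, 0.12, f"{rate_of_change:+.1%} vs. last week", transform=ax.transAxes,
        fontsize=9, color="green" if rate_of_change >= 0 else "red")

# Sparkline for the trend, tucked into the right side of the card.
spark = fig.add_axes([0.62, 0.25, 0.33, 0.45])
spark.plot(trend, linewidth=1.5)
spark.axis("off")

plt.savefig("card.png", dpi=150, bbox_inches="tight")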

***

I'm Christopher Berry
@cjpberry







Monday, September 23, 2013

How to focus on explaining a fact

Maybe you've watched people use an information visualization interface.

If you've ever watched somebody bring up the report screen in any game, or pair-programmed with them at their desk, you'd notice that most people are looking to dive in, get a number, and jump right back out.

And they're happy.

But you've also watched people react to information that is unexpected [1]. And sometimes they say 'oh'. And that's when they start diving in deeper. And that surfaces yet more results that are unexpected.

The individual dives deeper and deeper, trying to diagnose the problem or explain an unexpected jolt in performance.

That experience usually ends in dissatisfaction, either with how hard it was to diagnose the problem, or what they have to do about it.

Experiences that confuse are hard. And they tend to be enraging.

More people would feel less rage if they were more focused.

Here's how:

1. On a second screen, or on a notepad, start a brand new page; date it.
2. Write down the query you are looking to answer.
3. Navigate to the location(s) in the interface, or run the query, that yields the answer.
4. Take a screengrab, download the sheet, or record the answer on your notepad.
5. If the answer is what you expected, you're done, cross it off and move on to the next item; if the answer seems strange in any way, go to step 6.
6. Write down two or three reasons about why the result is not as expected.
7. Go to step 3, and repeat 4, 5, and 6, recursively, until you have discovered all the causal reasons related to your original question.

Recursion is a powerful tool so long as you stay organized as you plumb the depths of each rabbit hole.

The key is to record which section of the tree you're investigating. Sometimes all of your queries converge onto one causal factor. Sometimes the causes are multi-faceted and complex.
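
If you prefer the second screen to the notepad, a minimal sketch of that record in code might look like the structure below. The structure is the point, not the specifics; every query and answer here is made up.

# A sketch of the causal tree from the steps above (queries and answers invented).
from datetime import date

def node(query, answer=None, expected=True, children=None):
    """One entry in the investigation log: a query, its answer, and any follow-ups."""
    return {"query": query, "answer": answer, "expected": expected,
            "children": children or []}

log = {
    "date": str(date.today()),
    "root": node(
        "Why did weekly signups drop 18%?",
        answer="Signups fell from 2,400 to 1,970",
        expected=False,
        children=[
            node("Did paid traffic fall?", answer="No, flat week over week"),
            node("Did the signup form conversion rate fall?",
                 answer="Yes, 4.1% -> 3.2%", expected=False,
                 children=[node("Was the form changed?",
                                answer="A new field was added Tuesday",
                                expected=False)]),
        ],
    ),
}

def print_tree(n, depth=0):
    """Walk the tree recursively so you always know which branch you're in."""
    flag = "" if n["expected"] else "  <-- unexpected, dig here"
    print("  " * depth + f"- {n['query']} :: {n['answer']}{flag}")
    for child in n["children"]:
        print_tree(child, depth + 1)

print(log["date"])
print_tree(log["root"])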

This is the fastest way I know to stay focused and organized when navigating through an interface with over 500,000 metrics and reasons for anything. It'll work for you.

TL;DR: Recursion through a causal tree that you keep organized on a notepad.

***

[1] Expectations may be formed through heuristics, time series comparison, eyeballing the data, or, the proper way of comparing calculations from a model against the measurement provided by the instrumentation. I contend, with respect to digital analytics, that 99% of the population uses heuristics to form their expectations. Digital analytics is not engineering. Not yet. Dissatisfaction in response to expectations is a major driver of management heuristics; we might as well use it.

I'm Christopher Berry
@cjpberry

Monday, August 26, 2013

Episodic Change versus Continuous Improvement

This post delineates the difference between episodic change and continuous improvement, and its implications for digital analytics and the adherents of optimization.

Episodic Change

For some, change comes all at once and is traumatic.

Think about any website redesign. Or, even worse, think about the home page redesign all by itself.

The term 'redesign' is a loaded one. A redesign is not mere incrementalism. Usually, the website hasn't been significantly modified for such a long time that it begins to appear 'dated'. Think about the use of Frames in the 1997-1999 period. Or the collapse of Netscape Navigator that necessitated redesigns for IE. Think about the redesigns that had to happen in 2005 and 2006 because CSS looked like it was here to stay. Or, the post Great Recession redesigns of 2011 to 2012. Many of these were driven by technological change and amortization schedules, but also by the acuteness of dissatisfaction with the status quo. Never underestimate the power of inertia.

Redesigns are traumatic because of the forces at work. Change involves creating discomfort. It involves messing with things. An object in motion tends to stay in motion unless acted upon by an external force.

Some home pages look like they're org charts because they are org charts. Would you be surprised to know that entire websites are organized around the org chart? Oh, they exist. And they're much more common than you think.

The idea that websites should be organized around the user, the customer, or the primary persona, shouldn't be revolutionary. And, it's the ultimate testament to a digital team if, at the end of a redesign, the website is organized around the consumer/citizen, as opposed to the institution.

The folks who get digital go to war against those that don't. And, in organizational battles that resemble a scene from Game of Thrones, the forces of user-centric design clash with business-unit priorities. Departments are punctured with arrows and casualties litter the battlefield.

That's an episode.

And no wonder digital veterans only wish to play these episodes once every two to four years. It's exhausting. Then there's the maiming. In many ways, change is something that is to be endured. These scars, if you've been through many of them, stick with you. And they stick to the people who form digital institutions.

There are entire business models built on episodic change.

Continuous Improvement

Continuous improvement is sometimes talked about under the banner of 'optimization'. It is supposed to be a sequence of progressive hypothesis testing that gradually culminates in improved business results. It is frequently misunderstood as testing 41 shades of blue.
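
To make 'a sequence of progressive hypothesis testing' concrete, here's a minimal sketch of a single step in that sequence; the counts are invented and the two-proportion z-test is just one reasonable choice.

# A sketch of one step in progressive hypothesis testing (invented numbers).
from math import sqrt, erf

control = {"visitors": 5400, "conversions": 231}   # the existing page
variant = {"visitors": 5350, "conversions": 276}   # the hypothesized improvement

p1 = control["conversions"] / control["visitors"]
p2 = variant["conversions"] / variant["visitors"]
pooled = (control["conversions"] + variant["conversions"]) / \
         (control["visitors"] + variant["visitors"])
se = sqrt(pooled * (1 - pooled) * (1 / control["visitors"] + 1 / variant["visitors"]))
z = (p2 - p1) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided

print(f"control {p1:.2%}, variant {p2:.2%}, z = {z:.2f}, p = {p_value:.3f}")
# The hypothesis, the result, and the decision get recorded for the next iteration.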

Thanks to new technology, continuous improvement is becoming less traumatic. It's becoming easier, technically, to do. Continuous improvement is supposed to be routine. The approvals process is ideally flat and extremely simple.

Good continuous improvement comes packaged along with a data dictionary. Included in there are the mental models and a history of the tests that were executed over time. Some departments have a theory of marketing that enables groups of managers and directors to collaborate. If differences exist, these are documented and they become part of the body of knowledge. That body of knowledge, or expertise, is better because it's evidence-based, as opposed to being purely experience-based[1].

The really interesting part is when original consumer insights about who people are mix with what they are doing digitally. The difference between randomly changing the color or layout of a page in the dark, and accruing actual knowledge, is having at least a model or a theory about why people are what they are and why they do what they do.

Continuous improvement is not exhausting. It is not a nine month Game-of-Thrones episode. It is not a soap opera.

Continuous improvement is a sustainable system of activities. One does not have to replace the digital team afterward because everybody is burned out and jaded.

Optimization

Many of those in digital are producing disposable artifacts: The microsite. The banner. The Advergame. The campaign.

There is no technical debt to manage and there is no optimization to be had.

Version 1.0 goes into the market, it persists in the market for a blink of an eye, and then it's dead.

The implication for optimization is that in this part of the economy, there is no version 1.0.1 to optimize.

The assets themselves are episodic and there is very little appetite for longitudinal campaign analysis.

Episodic artifacts are episodic.

And, to an extent, these episodes leave some cultural residue behind. It's hard to see the mundane if you've only ever engaged in digital operas.

Some software, like eCommerce platforms, need not undergo episodic change as often[2]. They are candidates for continuous improvement.

In other words, for those making the transition from episodic sectors of the economy to managing systems that persist, continuous improvement can occur.

These are systems for which a system of continuous improvement, or optimization, is very appropriate.

----

I'm Christopher Berry
@cjpberry

[1] For instance: "Internal Search Engine Results Page which incorporated collaborative filtering generated a 44% lower site abandonment ratio than those that relied on content filtering alone." is an evidence-based statement. "I remember making changes to the Internal Search Engine Results page and I wasn't happy with the results" is purely an experience-based argument.

[2] Whether or not a complete redesign is required depends on the tempo of iteration. One test a year for three years wouldn't be enough.

Wednesday, August 14, 2013

Should we talk about sociometric influence again?

There's one particular firm that has reduced influence down to one single number.

A single, one size fits all, simple, number.
  • Some people use that number to augment their ego. 
  • Some make it a game.
  • Some dismiss it.
It's simple.

And simple wins.

And it has, in part, pretty much chilled most of the serious discussion we have in public about segmenting populations by the impact they have on the demand curve.

The other part is the hype cycle.

Sociometric influence

The relationship between post frequency, post content, and reach is relatively well understood. Posting consistently and frequently about the same topic causes incremental reach to accrue over time. And content, applied frequently to an audience (reach), is measurable on many social networks. It's less visible in email networks, forums, blog posts, reviews, and on Facebook, but it's certainly there. It's observable by somebody. In general, the relationship is known, and it's not uncommon for different people to have estimated coefficients on that strength.
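
To make 'coefficients on that strength' concrete, here's a hedged sketch: regress reach against posting frequency for a single account and read off the slope. The weekly figures are fabricated and the linear form is a simplification.

# A sketch: estimating the strength of the frequency-to-reach relationship.
import numpy as np

posts_per_week = np.array([2, 3, 3, 5, 6, 7, 8, 10])                   # fabricated
weekly_reach   = np.array([410, 520, 500, 760, 830, 950, 1010, 1280])  # fabricated

slope, intercept = np.polyfit(posts_per_week, weekly_reach, 1)
print(f"each additional post is associated with roughly {slope:.0f} extra reach")
# Different people, topics, and networks would yield different coefficients;
# that's the sense in which the relationship is known but account-specific.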

The relationship between post frequency, post content, and reach, and their ability to cause the demand curve to shift, is not as well understood. There have been statements made about differences-in-differences effects, but there have been fewer data points on how the demand curve is affected.

The relationship between the social structure of a network, and the diffusion of a product through it, over time, is not nearly as well understood as the other two. There are a few practices that have access to that underlying data. There are fewer commercial groups that are actively hacking the graph.

If we understand these relationships, then we can improve them.

People are so much more than a bundle of transaction records. They're actual people. And it's worth understanding people through the lens of how they cause changes in each other.

The value of a single number

A single number wins every time because it's easy.

Simplicity wins every time.

You can create an ordered list, an ordered segment, of people who can predictably cause the demand curve to shift.

But for that abstraction to be an effective diagnostic tool, for it to be more useful, it has to decompose into parts. Its components, ideally, should be independent of one another. And that decomposition should have an intuition that drives it.
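
As a sketch of what 'decomposes into parts' could look like - this is a made-up scoring scheme, not the proprietary one - consider a weighted sum of named, interpretable components that can each be inspected on their own.

# A sketch of a decomposable influence score (made-up components and weights).
components = {
    "reach":         0.42,   # share of the relevant audience actually exposed
    "topical_fit":   0.77,   # how consistently the content matches the topic
    "response_rate": 0.18,   # how often exposure produces an action
    "demand_shift":  0.06,   # estimated lift in purchases among the exposed
}
weights = {"reach": 0.2, "topical_fit": 0.2, "response_rate": 0.3, "demand_shift": 0.3}

score = sum(weights[k] * components[k] for k in components)
print(f"composite score: {score:.2f}")
for k in components:
    print(f"  {k}: contributes {weights[k] * components[k]:.2f}")
# Because each part is named and separable, the number can be diagnosed, not just ranked.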

Should we talk about it?

When even the taxi cab driver tells you that social analytics sucks, you know that you're deep in the trough.

There is one number that dominates it all. And it's proprietary. It doesn't decompose.

I don't think that that number actually means what people want it to mean.

At what point do we build the foundations of linking actual sociometric influence back to the demand curve, and expressing it in such a way that it is accessible?

And, moreover, what alternative do we have to the proprietary black box?

Is it time to start talking about it again?

***

I'm Christopher Berry

Monday, July 29, 2013

Regression to the Meme: The triumph of low cognition content in the attention economy

At the time of writing, Reddit reaches approximately 25 million Americans. The algorithm used to curate content has generated some data and received some study. We've known for a long time that experiences that demand less cognition are more usable. This is the insight that propels Twitter, Vine, Instagram, Netflix - among many others.

As the size of a community grows, there is a tendency for the volume of low effort content to increase. Put more technically, as the average audience grows, curators tend to contribute an increasing volume of low effort content. There's a feedback loop between the audience, the algorithm, and content curators [1].

That process can be called Regression to the Meme.

Regression to the Meme isn't really all that new. It's the social extension of the same natural process that went into television development for decades.

There's a concept from broadcast television called Jolts Per Minute, or JPMs [2]. It's a measurement of how many laughs or surprises are delivered in a minute of media. The rule of thumb was that bigger JPMs generated bigger audiences.

The image macro, or meme, is remarkably easy to consume. On some devices, twenty memes can be consumed in a single minute. Puns are also very easy to consume. Extremely short pieces of content, demanding little cognitive load, can produce very high Jolts Per Minute, especially when they're strung together in a thread.

Low cognition content isn't necessarily low effort, either. Where there's a game there will always be gamers. And quite a few people have worked very hard to produce brilliantly low cognition content.

In sum, attention is a finite natural resource. People have many options for how they're going to spend their time. Large audiences prefer not to think. They want high Jolts Per Minute. That doesn't mean that all content is destined to regress to the meme. The long tail gets longer every day.

Regression to the Meme can be understood, and it can be hacked.

***

I'm Christopher Berry
@cjpberry


[1] This effect has been observed when software and consumer product reviewers adjust their subsequent reviews of a product based on consumer feedback. There are feedback loops everywhere!
[2] At the risk of carbon dating myself - I learned the term 'Jolts Per Minute' in the nineties. I don't know if it's still used and I couldn't find a substitute.

Monday, July 15, 2013

The Performance Report and The Insight

The Performance Report is the wrong vehicle to communicate an Insight.

The Performance Report 

Performance Reports periodically report the same set of Key Performance Indicators. The date on the calendar determines the cadence.

The best example of a Performance Report is below. It's from the last page of The Economist magazine. It is entitled "Output, prices and jobs".


It contains four (4) Key Performance Indicators. It is enumerated across 6 major thematic regions and a supplementary area called 'More Countries'. The KPIs are GDP (output), Industrial Production (output), Consumer Prices (prices), and Unemployment Rate (jobs). Only two indicators are broken out for trends - GDP and Consumer Prices. It has, in all, 10 columns of data. It contains notes about when the most recent piece of information came out.

This is a great periodical. It's been the same for as long as I can remember. It's scalable. It's been sustained. And it's been great at generating situational awareness.

It has not undergone vanity metric inflation. The head of the United States hasn't insisted that Total Employed be added to bolster her specific business argument. China isn't arguing for their own vanity metric to get pushed to it. It's stable and practical.

The periodic performance report generates situational awareness. No more. No less.

The Economist has done a great job in keeping this table clean and consistent. It's a great resource, and, it's a great example of what I believe that situational awareness should be.

It is the wrong place to wedge in an Insight or two.

The Insight

The Economist magazine has a place for insights: it's the ~70 odd pages that precede the table in the back.

True insights are harder to mine. Merely staring at the table and writing two or three observations about it isn't an insight. Those are observations. That might be highlighting areas of interest. But that isn't an insight.

Quantifying and explaining the drivers of economic performance is valuable. The activity of probing and investigating yields insights. The act of producing the table is important. But it shouldn't be confused with insight generation.

The act of discovering an insight, and deciding if a given investigation is in the Firm's interest to know, is a very different activity.

It involves probing cause and effect. It also mandates that the underlying data be extracted, transformed, and loaded in a very different way from the needs of a reporting periodical. Typically, time series data, and context about that time series, is required to discover good insights. In other words, the procedure for insight discovery, right down to the roots of the data model, is different from situational reportage.

I think that when we try to jam the two formats together into the same artifact, we do real harm to ourselves. The Economist has a massive team of journalists and economists to report the world. Most firms only have a handful of people to handle, arguably, a lot more information than The Economist has access to.

The way we structure our system of activities should recognize and respect the differences between these two artifacts.

***

I'm Christopher Berry.
Definition of an insight.

Thursday, June 27, 2013

An analytical perspective of Paid, Owned, and Earned media tactics

When does 1 + 1 + 1 = 4?

No, not when you fat finger a spreadsheet.

It's when Paid, Owned and Earned tactics are overlaid.

Heresy? Read on.

Paid, Owned, Earned

There are competing sets of definitions of what constitutes paid, owned, and earned media.

To over-simplify:

Paid media: interruptive media, media that is inserted into an artifact created by the creative class, for some form of promotional consideration.

Owned media: non-interruptive media, media that is created by the firm. Media that is intended to always align with the content strategy and the brand key[1].

Earned media: interruptive or non-interruptive media, media created by individuals not affiliated with the firm and without promotional consideration. Media that is not necessarily aligned with the content strategy and brand key.

Can you give me a few examples?

June has a website filled with content that she wants prospective customers to see (owned media). People rarely visit the website spontaneously, so she takes out a Google AdWords campaign (paid media) to drive people to her website[2].

No controversy so far.

June has a website filled with content that she only wants those with signing authority to see (the website is owned media). She takes out a Facebook Ad campaign (the ad is paid media) to only target those likely to have signing authority. She also posts a link to a white paper to the Business Intelligence LinkedIn group (the white paper is owned media). Joey sees the post via LinkedIn, and shares it with his newsfeed (his action is earned media).

It's easy to get bogged down in the fact that Facebook is a social network with an ad platform in it. It's also easy to forget that a piece of shareable content is, unto itself, a piece of owned media. June can't control what Joey says about her whitepaper, and that's earned media.

Let's not get bogged down though. Reasonable people can categorize different tactics differently.

Earned Media in the Mix

Earned media offers a range of tactics that can be mixed with existing content and paid strategies. There isn't anything truly revolutionary about it. It just takes a little bit more effort  and creativity and a little bit less cynicism to make it work.

Marketers have long had the intuition that they can't possibly pay the full price for customer acquisition. Many of the best direct marketers inherently understand this fact too. We've understood that when people talk, when they endorse, when they recommend, really great results follow. What's different about the social technology, today, is that earned media can spread far faster and wider than before[3].

Layering in earned media tactics as part of a total strategy is far more likely to generate observable reinforcing effects than using each channel in isolation. An enhanced return isn't guaranteed. But the odds increase in one's favor. It's one way to get 1 + 1 + 1 to equal 4.

Optimization versus Attribution

How can you optimize for so many reinforcing effects?

Consider that the best content can't take advantage of the amplified newsfeeds if it isn't shareable. Consider that content won't get shared if it isn't targeted at relevant people. Consider that it's unlikely for anything to become spontaneously viral.

When things go right, it all goes right together. Conversely, when one thing goes wrong, like a disastrous content strategy, the whole thing goes wrong.

I recommend a comparative approach, with the intent of optimizing the whole system, as opposed to a component approach, with the intent of attributing relative impact. Focusing on optimizing the entire system, together, is likely to generate greater incremental returns than fast-failing each component in-situ and without regard to each other.

Importantly, an optimization approach is more likely to generate better institutional outcomes over the medium and long runs. The downside is that it's far easier to operate in silos, so there are significant short run risks.

The analytical perspective of mixing paid, owned, and earned media tactics is that it can be tough, but it doesn't have to be. The reinforcing effects, explained briefly in this post, are the mechanism by which incremental value is derived.

***

I'm Christopher Berry

[1] The ability for a firm to stay aligned to content strategy and brand key is a function of effectiveness. The intent of those creating owned media is continuous alignment with the brand key.
[2] A search engine is certainly an artifact created by the creative class. Remember when less than 50% of a Search Engine Results Page (SERP) was covered in ads? (Pepperidge Farms remembers.)
[3] Earned tactics directly impact the coefficient q in the Bass Model. Most marketers intuitively understand this, but they don't use the term 'q' to describe it.
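
For readers who want the mechanics, here's a minimal sketch of the Bass Model in discrete time: p is the coefficient of innovation (external influence, like advertising) and q is the coefficient of imitation (internal influence, like word of mouth). The parameter values are illustrative only.

# A sketch of Bass diffusion: adoption driven by p (innovation) and q (imitation).
p, q, market_size = 0.03, 0.38, 100_000    # illustrative values, not estimates

adopters = 0.0
for week in range(1, 27):
    remaining = market_size - adopters
    new = (p + q * adopters / market_size) * remaining
    adopters += new
    if week % 5 == 0:
        print(f"week {week:2d}: {adopters:8.0f} cumulative adopters")
# Raising q - stronger earned, word-of-mouth effects - steepens the S-curve.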


Monday, June 10, 2013

We'll go back to simply calling it data soon enough

A researcher at Gartner says that, by 2016, we'll go back to calling it Data again. The Big Data hype will be over. Done.

There's very good reason to believe that.

Three Worlds

  • In the popular world, Big Data is generally taken to mean the new data streams that are growing, in addition to the suite of technologies that make its use better.
  • In the more technical world, Big Data is taken to mean any set that is too big to be loaded onto a single computer.
  • In the more digital analytical world, Big Data is taken to mean anything that is too big to fit into a spreadsheet.


Business Intelligence

Information management, for the past 60 years, was business centric and very transactional in nature. We call it Business Intelligence for a reason. So naturally, when consumers started generating data on their own terms, traditional business intelligence approaches were going to have a rough time.

Social data streams, and the proliferation of behavioral data streams, came at a time when BI was well into the harvesting mode. The big investments made in the nineties needed to get paid off. Ivory backscratchers and blow for everybody!

We can safely assert that the 2000's were a sleepy time in BI. A few text mining technologies were dusted off from the eighties. We saw that. A few weak flirtations with log file data (!?) in and around 2004. And then disruption.

The Apache foundation released a few technologies just as cloud computing was becoming accessible. And then came the era of the app, rendered palatable to the masses because these little bundles of software were perfect for mobile devices with small screens.

Convergence, just not the way we expect it

These new data streams will, at some point, find their place within the traditional BI stack.

And it'll be in the form of a checkbox.

Yup, we got social data in there. Yup, we got behavioral data in there. Check and Check.

Now stop asking us about it.

And when that happens, Big Data will just go back to being Data. The revolution will be over, people. You don't have to go home but you can't stay here. And, from what I gather, this is the cheapest way to do it.

There is a second way.

The alliance between data scientists, those that turn data into product, and consumers, those that are generating the data, is so far pretty strong. Bit.ly, Netflix, and LinkedIn are really visible examples of that. Folding data deeper into the user experience, as a feedback loop, as opposed to an output stream, is the differentiating activity.

At some point this is going to stop being novel, and once again, it'll just go back to being Data.

And then our revisionist historians will deny that there was ever anything truly disruptive about the new data streams. That it was all just an evolution. That's all.

But we know that it isn't really true. Right?

***

I'm Christopher Berry.
@cjpberry

Friday, May 31, 2013

Categorizing Content against Categorizing Audiences

There are two big chunks of data layered upon digital analytics. These are content categorization, and audience categorization. The management of how these features are extracted from the underlying data, and how they're represented, is at the core of 80% of the problems and opportunities in digital analytics.

Content Categorization

The root unit of traditional digital analytics is the pageload. The page has features.

Its creator might intend the page to be a landing page. Or a transactional page. Or a product page. Or a search results page. And there are a dozen other features that a page might have. It might have a nav bar. It might not. It might have a footer. It might not. It might have rich media embedded in it. It might not. Content has features. You get my point. Features about a page are the basis for categorizing that page.

It might contain a link to a specific page. It might not. It might belong to a specific department - like hardware or sports - or it might not. You get my point. Pages have features. And a page may belong to multiple categories.

There are units inside the page, divs, which can be engaged with. Divs have features too. That's an exponential increase in metadata, and that revolution is already happening. (You're just not hearing much about it.) Divs have positions on the page. Some are only visible on mobile devices. Some contain text. Some are buttons. Again, you get my point. Divs, which contain elements that may respond to a user input, have features too.

Audience Categorization

The audience has features too. The dominant feature used to be returning visitor, or put another way, "does the browser that this person is using have a cookie on it that enables us to know the last time we saw this specific browser on this website?" Tremendous handwringing goes into the returning visit feature. But there are plenty of others.

Is the visitor using a mobile device? Is the visitor likely to be a robot? Where is the browser? What browser are they using? What is their screen resolution? Have they been a customer before? How much did they buy? What company are they visiting from? What is their name? What is their title? You get the idea.

Sometimes you'll hear audiences referred to as 'traffic'. That's rather unfortunate. However, audiences may be categorized based on the origin of their visit.

Audience categorization is not market segmentation. It really isn't. The purpose of market segmentation is always to identify a group of self-referential people for the purposes of driving more efficient and effective marketing by way of the application of rules against those people. Audience categorization, as it is broadly practiced today, is entirely an observational activity.

Analytics at the Intersection

Alright, so if you know the features of an individual and you have the features of the pages, the combination of those two datasets can be, in some cases, expressed in a crosstab.
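
Here's a minimal sketch of that crosstab, assuming pandas and using randomly generated pageloads:

# A sketch: cross-tabulating page features against audience features.
# Every pageload below is randomly generated.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 500
pageloads = pd.DataFrame({
    "page_type": rng.choice(["landing", "product", "search", "checkout"], n),
    "audience":  rng.choice(["new visitor", "returning", "customer"], n),
})

print(pd.crosstab(pageloads["page_type"], pageloads["audience"]))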

(Those are random numbers - don't even try to extract meaning from them.)


Alright, well, so what? What's the problem?

1. The data about page features is frequently embedded directly in the tag itself

There was a legacy decision, stemming from 1995, that page categorization was to be recorded upon pageload. It's embedded right into the javascript on the page. The problem with that is that the analyst often has no control over the copying and pasting of the code. And transposition errors are absolutely endemic. The industry, finally, starting around 2008, responded to the problem with tag management systems. It's embarrassing that it took so long to separate feature extraction from measurement.

But progress is finally getting made.

2. Cookie deletion

Some people make a habit of deleting their cookies. Audience categorization is less accurate as a result.

3. Device proliferation

People engage in different behaviors on different devices....and the cookie doesn't follow them from their work computer, to their smartphone, to their tablet. If you're still assuming that a cookie represents a distinct person, you're assuming wrong.

What To Do?

There has got to be a better way to manage page categorization. Tag management is one piece of the puzzle. There are huge opportunities in page categorization as a source of additional dimensions for actionable analysis. The linkage back to content strategy is one possible source.

And, ultimately, there has got to be a better way to understand audiences beyond traffic categorization and whether or not we've previously seen this browser. Ultimately, permission has to be part of that mix, but it is doubtful that we ever approach 100% precision and certain that we will never achieve 100% accuracy.

These two functions, content and audience categorization, are at the core of most of the headaches. They represent tremendous opportunity for sustainable competitive advantage if they can be cracked.

***

I'm Christopher Berry.
Thanks for reading.

Monday, May 6, 2013

Tools favor the pageview, so thought favors the pageview

Tools favor the pageview, so thought favors the pageview.

An Example:

You visit a website, either by clicking on a link or directly typing the site out. A page loads. You're greeted with a massive flash/javascript carousel. You click on it to spin the carousel. You see something that's interesting. You click on that. It sends you to that product detail page. You click add to cart. The cart total, in the upper right hand corner, is animated with an updated total, and a checkout button appears. You click the checkout button. The checkout summary page loads. You see the shipping cost. You click the Google button in your toolbar, abandoning the page.

A lot happened there.
  • Loading the first page causes a pageview. 
  • Clicking on a carousel, if tagged properly by analytics software, is an event that belongs to that first page. 
  • Clicking on any item causes a second pageview. 
  • Clicking on the 'add to cart' button, if tagged properly by analytics software, is an event that belongs to that second page.
  • Clicking on 'checkout' causes a third pageview.
And that's the last that most software sees of you.
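
As a sketch, the raw hit stream for that session, if everything were tagged, might look like the list below; most tools only get the rows marked 'pageview'. The timestamps and labels are invented.

# A sketch of the session above as a hit stream (timestamps and labels invented).
session = [
    {"t":  0, "type": "pageview", "detail": "home page"},
    {"t":  4, "type": "event",    "detail": "carousel spin"},       # often untagged
    {"t": 12, "type": "pageview", "detail": "product detail"},
    {"t": 31, "type": "event",    "detail": "add to cart"},         # often untagged
    {"t": 35, "type": "pageview", "detail": "checkout summary"},
    {"t": 52, "type": "event",    "detail": "abandon (left site)"}, # rarely captured
]

pageviews = [h for h in session if h["type"] == "pageview"]
print(f"{len(session)} hits in the session, {len(pageviews)} visible as pageviews")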

Most events are not tagged because most software does not tag them automatically.

And, when those events are tagged, they're treated in varying ways by different software. Events have always been, somewhat, second class citizens compared to pageviews.

Why are Events Second Class Citizens?

The same phenomenon that's responsible for the QWERTY keyboard. It's lock-in.

In 1993, before Mocha, before Applets, the way people used the web was centered, entirely, on moving from page to page. And that's what the first web analytics tools were set up to understand. In 1995, there was a split between those firms that relied on server web logs and those firms that relied on Mocha/Javascript. This had hilarious consequences on the way people think and talk.

For a very long time, since the first applet in 1995, through to Macromedia Flash, to ancient AJAX, and now frameworks like jquery, emberjs, angularjs, (and others), we've been inventing experiences that do not cause a page load. It's possible to have an entire experience without causing a second page load.

These technologies enable experiences that are very rich in events. And, those events are not all automatically captured, processed, and understood by web analytics software or the people that use them.

Events, those that occur after or before another page load, have been somewhat troublesome.

And that's in part because the technology was built on a different foundation, and it's locked in on that foundation. And, it's in part because there's been quite a bit of trouble in recording the context in which an event is situated. At an even deeper level, it's not as though most of these frameworks have been written with recording the context of an event in mind.

Record All The Things!

A new generation of digital analytics, enabled by big data (in the cloud!), simply records the entire session, every single click, swipe, and scroll, and passes it back to a server for storage. And then some poor person has to make sense of it all with little context.

The context, however, remains a problem. And sadly, for both the reader and for all of us, there is no handy way of farming context. Not yet.

Where we're at

So, as a result of the page load hogging all the glory as top dog in many of the tools we use, many digital analysts think in terms of the pageview.


Tuesday, April 23, 2013

Lessons Learned: Jitter

I included a picture sort of like this one in an early version of a work in progress. Check it out.



It's really not immediately obvious what you're looking at, so it's a fail. It's full of fail. 

It's a fail because the placement of dots, horizontally, contains some actual information, but not as much information as you might believe. It breaks the way you're used to looking at a scatter plot. And that can be really confusing, or even enraging. 

What you're seeing are two box plots depicting a group of people who have never viewed a particular page, and another group that has viewed a particular page. The Y-axis is the number of comments that each individual left. The fact that a person belongs to a particular segment, a particular factor, on the X-axis, is the critical detail. The actual placement, horizontally, is random.

What's the purpose? 

It's threefold. First, it's communicating a stark difference between the two groups, and making a strong editorial statement that factor X is driving Y. Second, you're able to rapidly see the distribution of activity within one of the groups. And third, you can spot the outliers quickly.

Aside from being in one group (1), or the other (NA), there's no other meaning to the X-axis placement. It's randomized, jittered, so that you can see the density of people. It's like looking at a really detailed frequency table on its side. Or a really bad population pyramid.
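
If you want to reproduce the general idea, here's a hedged sketch with simulated comment counts; it is not the original figure or data, and it assumes matplotlib and numpy.

# A sketch of the jittered comparison (simulated data, not the original).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
never_viewed = rng.poisson(1.5, 300)   # comments from people who never saw the page
viewed       = rng.poisson(6.0, 120)   # comments from people who did

fig, ax = plt.subplots()
ax.boxplot([never_viewed, viewed], positions=[0, 1], widths=0.5, showfliers=False)
for x_center, y in [(0, never_viewed), (1, viewed)]:
    jitter = rng.uniform(-0.15, 0.15, len(y))   # horizontal placement is random
    ax.scatter(x_center + jitter, y, alpha=0.4, s=12)

ax.set_xticks([0, 1])
ax.set_xticklabels(["NA (never viewed)", "1 (viewed)"])
ax.set_ylabel("Comments per person")
plt.savefig("jitter.png", dpi=150)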

I was thinking that these are a faster way of summarizing the data, to an interested audience, than a contingency table or the summarized final product. 

Nope. You gain no efficiency.

This particular visualization is really good for communicating to your cohorts about possible segments to explore. It takes all of 40 seconds to snap a picture, draw a few boxes and arrows, and shoot off an email. I don't think that it's ready for the analytics stage quite yet.

Lesson learned.

***

I'm Christopher Berry

Friday, April 12, 2013

Matching the right data experience with the right user goal

You use data because:
  • You want to know what's going on. (Jargon: Situational Awareness)
  • You want to know what's likely to happen in the future. (Jargon: Predictive Analytics)
  • You are dissatisfied with the current state and are searching for the cause or causes. (Jargon: Discovery, Research, Witch-hunting)
  • You are dissatisfied with the current state and are searching for alternatives. (Jargon: Search).
  • You want to make up your mind. (Jargon: Decision Management / Decision Support)
  • You have made up your mind and now you want to build a business case; you want to use facts to kneecap an opponent, or generally seek to persuade or influence people. (Jargon: Rhetoric, Convenient Reasoning, Making a Business Case)
  • You want to defray the risks of making a bad decision, you're looking where you leap. (Jargon: Risk Management)
  • You want to make the best decision. (Jargon: Optimization)
  • You don't want to think; you just want the system to respond to its environment, learn, personalize, and optimize. (Jargon: Decision Automation, Machine Learning)
These are all admirable goals/wants.

I can think of four delivery mechanisms, or service experiences, for how we consume data.

Four delivery mechanisms

1. Static

A non-interactive artifact, like a photo (PNG, JPEG, TIFF, .... GIF(!?!)).

2. Interactive

An interactive artifact with an interface that responds when a user prompts it to respond. Like Google Analytics, SaaS, Rstudio, recommendation engines, or Excel.

3. Human Audio-Visual

A human being talking about data and maybe even using a static or interactive (!) artifact to respond to the consumer in....ummmmmmm.....ummmmmmm.... real time.

4. Stream

An ongoing, updating list, of either static or interactive artifacts, delivered either visually or by way of an API, that generally responds regardless if it was prompted.

0. Other

(I don't know what I don't know. Something with Glass.)

Matching the right experience with the right use case

It's a really, really good idea, in the rush to grab modelling tools and new data visualization frameworks, to consider whether Polychart, WEKA, or a full on Ruby on Rails experience, matches the desired outcome.

That's obvious, so what?

There's a general mismatch out there between what people think they're buying and what people are getting.

1. Hear about Big Data.
2. Collect Big Data.
3. Store Big Data.
4. ????
5. Profit!

At the root is either confusion, or a mismatch, about what each technology does well, and what it doesn't do well.


***







Saturday, March 30, 2013

Deep Learning is sort of like Ethnographic Research

I see a parallel between quantitative Deep Learning methods and qualitative Ethnographic Research. Maybe you see it too?

Ethnographic Research

Ethnographic research is a qualitative research method centered on understanding what drives a culture. It involves researchers going out into the field and spending a long time making observations about people and their environments. It's about extracting deep meanings about groups of people. Academic ethnographic research is quite involved, frequently taking years to complete, and, is usually done by anthropologists.

Commercial ethnographic research is a bit different. Academics seek universal truth. Commercial research seeks design truth. A quote from Conifer, a firm that does commercial ethnographic research: "An ethnographic approach to design research can provide a holistic understanding of users, their routines, motivations and beliefs that extend beyond the original research intent." Sounds good, doesn't it?

And then there's something that somebody made up: calling 4 house calls in 4 hours 'ethnographic research methods'. It happens too often. And it isn't 'savvy'. It's fraud.

Commercial qualitative research that employs methods drawn from ethnographic research can be incredibly valuable and yield actual insights. (novel knowledge that wasn't known before that causes a decision to be made that wouldn't have otherwise been made, causing a better result.)

Deep Learning

Why is Deep Learning the best keyword? It's the newest!

Deep Learning is a quantitative method, drawn from Artificial Intelligence and now accepted as a full member of the Machine Learning family of methods. It uses neural networks to extract understanding about the relationships among variables within a dataset and to make accurate predictions about the future.
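
To fix the idea without any framework, here's a toy sketch of the basic building block, a forward pass through a tiny two-layer network; real deep learning stacks many such layers and learns the weights from data rather than drawing them at random.

# A toy sketch of one forward pass through a tiny two-layer network.
# Weights are random here; in practice they are learned from data.
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=4)                           # one observation, four input variables

W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)    # hidden layer: 4 inputs -> 8 units
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)    # output layer: 8 units -> 1 output

hidden = np.maximum(0, W1 @ x + b1)              # ReLU non-linearity
output = 1 / (1 + np.exp(-(W2 @ hidden + b2)))   # sigmoid, e.g. a probability

print(f"predicted probability: {output[0]:.3f}")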

Deep Learning imitates the human brain. It can also belong to a class of unsupervised learning methods, which is also very hot right now. And, you might be hearing a lot about computer vision, owing in part to Google's interest in processing visual data from Glass. There's a good analog between human vision and the human brain; and computer vision and the computer brain.

It's new because some data scientists are like DJ's. They root around in the old records looking to dust off the great things, and update them with some modernity. I love that. You see farther when you stand on the shoulders of giants (Newton said it). Jeremy Howard is a pretty damn good DJ.

The New Yorker and The New York Times have written about it. The concept got a lot of attention at Strata, and, more innovators are openly talking about how to address the processing issues. There have been some pretty big moves in Toronto around people who know deep learning algorithms. There's a lot happening in a field that gets very little attention.

There's academic deep learning, which is intense and focused on very practical matters of scalability. It takes years to complete.

There's commercial deep learning. Much of the computer vision work that's going into Google Glass is centered around that.

And then there's stuff that claims to be deep learning, but isn't.

Counterfeiters exist. That's a pretty universal parallel.

The parallel

The real parallel is that just as you wouldn't do a commercial ethnographic study every month, because it doesn't scale, you wouldn't execute a commercial deep learning study every month, because it doesn't scale.

Deep learning algorithms, as of yet, do not scale.

There does seem to be a pervasive belief that all quantitative methods scale beautifully, in ways that qualitative methods just can't. This is generally true. But not always.

You shouldn't be thinking of deep learning techniques as a monthly deep dive into nature. Not yet.

Assign them a place in your mind next to actual ethnographic research. That's where they belong for just now.



Tuesday, March 19, 2013

Communicating Data To Designers


Patrick Glinski, Idea Couture's head of service design, and Christopher Berry (yours truly), Authintic's chief scientist, are presenting "Communicating data to designers: the soft side of hard data", at the eMetrics conference in Toronto tomorrow.

This post isn't just a plug. There's value here.

Design at a marketing analytics conference

It's unusual to present ideas about design at an analytics conference. But it's during the design phase that hundreds of micro-decisions are made. Analytics is all about making better decisions. Designers aren't analysts. There's a gap there.

There's a lot to be gained by empathizing with designers. If analysts, and analytics managers, understand how designers work, they stand a much better chance of causing better outcomes. But it isn't enough to simply tell designers how to be better.

Designers fundamentally want to learn how to make design better. Acknowledging and working on communication style can make the difference between a relationship dominated by bitterness and alienation, and better outcomes.

Is it too soon to call ArtScience over?

You may recall, in 2009, a lot of discussion about the fusion of art and science in agency planning circles. Two of the principal writers at the time were James Shuttleworth and Michael Fassnacht. The spirit of the idea was to redefine strategy by harmonizing design methods and scientific methods.

There were attempts in 2010 around the idea. When the org charts came out and process flows were explained, both designers and analysts recoiled. It's rare that re-arranging boxes in an org chart ever fundamentally changes the culture of a company, much less forces empathy. Many people came out of 2010 a fair bit frustrated with the process.

In spite of the difficulties, the spirit of ArtScience approaches have taken root in a few places.

Patrick is a leader in service design. He approaches problems using both design thinking and analytical b-school thinking. It takes both systems thinking and design thinking to design new systems. And it works for him.

I'm good at data science product development. I approach problems using analytical b-school thinking and design thinking. It takes both to design effective products. And it works for me.

ArtScience might be poised for another set of attempts. In other fields of marketing, that future is already here. It's just distributed normally.

Call to action

If you're in Toronto and want to know more about marketing analytics, you should attend eMetrics. If you're at eMetrics, you should see our presentation.

Wednesday, March 13, 2013

Analytical responses to uncertainty

Sometimes it feels like the 1680s in analytics, and in data science more generally.

And that's exciting and terrifying.

There's a lot of knowledge built up about how to move aggregate public opinion. During the Mad Men era, we understood that we just needed to repeat the same talking point enough times to cause a majority of people to believe it. Repetition is truth. Popular truths are contagious. All of that concentrated attention meant concentrated changes in opinion.

We were trained in how to interpret and manipulate those aggregate figures, how to draw straight-line regressions between GRPs, price, and volume of product sold. That unique understanding is how entire generations of analysts-turned-executives justified their worth. Y = m1x1 + m2x2 + b. This wasn't hard. It still isn't hard. A large number of decisions are still justified using such equations. Or some heuristics that are drawn from them. We had certainty then.

We have growing uncertainty now.

Attention has been fragmenting for some time now. And that fragmentation is accelerating. The b in Y = m1x1 + m2x2 + b is inflating. The goodness of fit of these old laws has been breaking down. The predictability and replicability that we crave is breaking down. We want to reliably know that a dollar invested in this channel, today, will cause a positive return on that dollar, at some point in the future.

The rest of this post is about the responses to that uncertainty.

Some of the most advanced marketers understood this in the last decade, and responded by ramping up the amount of risk that would be tolerated by the organization. Risk in messaging was required to cause positive returns. If every brand played it safe, the entire firm was certain to fail. If all brands took risks, by virtue of the 80/20 or 95/5 rule, the entire firm was certain to thrive.

There's something just a bit terrifying about the different directions a whole bunch of groups have gone.

There's a school that argues, quite vehemently, that the equation didn't exist in the first place, that effective frequency is a myth, and that it's entirely about the message. This is pretty much like flat earth theory or creationism. The relationship between message, frequency, and sales is changing; it isn't the case that they're unrelated.

There's a school that is overfitting the data. These are a unique class of marketing scientist / data scientist that are engaged in hyperoptimization. They're deriving equations in the format Y = m1X1 + m2X2^2 + m3X3^3 + ... + miXi^j + ... . These are big massive equations. And they tend to make really accurate, short run, predictions. It's really easy to sell hyperoptimization, and people want to buy it. I think these are really important tactics, and really important work. I'm really concerned by overfitting the data and confusing a local phenomenon for a general natural law.
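
The overfitting worry is easy to demonstrate in a sketch: fit a high-degree polynomial to a handful of noisy points and it will track them beautifully while generalizing badly, especially a little beyond the range it was fit on. All of the data below is simulated.

# A sketch of overfitting: the flexible fit looks great in-sample, then falls apart.
import numpy as np

rng = np.random.default_rng(7)
x_train = np.linspace(0, 10, 12)
y_train = 2.0 * x_train + 5 + rng.normal(0, 3, x_train.size)   # the 'old law' plus noise

simple   = np.polyfit(x_train, y_train, 1)   # Y = m*x + b
flexible = np.polyfit(x_train, y_train, 9)   # a 'hyperoptimized' equation

x_new = np.linspace(0, 12, 50)               # new data, slightly beyond the old range
y_new = 2.0 * x_new + 5 + rng.normal(0, 3, x_new.size)

for name, coeffs in [("simple", simple), ("degree-9", flexible)]:
    err = np.mean((np.polyval(coeffs, x_new) - y_new) ** 2)
    print(f"{name:>9} fit, out-of-sample mean squared error: {err:.1f}")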

There's a school that is taking a step back from hyperoptimization and looking at the bits that go into a strategy, and how data interfaces with that strategy. They're still deriving equations, but there's an emphasis on generalizability at the expense of overfitting. They're looking at what drives the consumer, and balancing it with how the firm is differentiating itself. This approach is the most sustainable, but the hardest to communicate.

It's exciting and terrifying.


Friday, February 22, 2013

Why Analysts Should Know How To Code

I heard something to the effect of: "...analysts don't code, so these tools have wizards that help analysts model the data...and then these tools can create code, that is used by others..."

Damn. That's just not right. About the code.

I'll support that statement with experience and anecdote [1].

My first kick at learning SPSS, my first statistical package, involved getting on the receiving end of a Teaching Assistant's angst. They were telling people how to use the menu pulldowns and wizards. Students could do real damage to their data set and not understand what they had done. I did. Worse, there was no way for them to verify that what I had done was really good. It was a wondrous semester. I was building a statistical package for a software engineering course, and, I was learning how to use a commercial package. Brutal. But better for it.

I grew a little. When I became a TA, I taught syntax. Forget the wizards. Forget the whole pulling down menu items and repeating things ninety times like a Quattro Pro instructor. Syntax worked. I could see, at a glance, exactly where students had gone wrong. It made it easy for me to mark. It made it harder for them to get through the course. It was win-win.

When it came time to collaborate with my peers, and do real research, I kept on using syntax. By then, I had learned Python and Lisp, but I was about the only one who knew either. So, SPSS syntax was the lingua franca. I could write comments in the code. I could tell people what I was thinking when I was running a particular section of code. I could scale.

I grew a lot more. When I moved into the private sector, I kept on using SPSS and the syntax method. I used SPSS syntax to move mountains, quickly, reliably, and repeatedly. The results were accurate. I could estimate my work effort reliably and with increasing precision. It worked for me. I scaled.

When I grew into management, I could divide and conquer work using the code. I could collaborate with my lieutenants across time zones. I could scale what I knew across dozens of people, easily. I scaled.

And when I had the authority and legitimacy to lead and teach, I did so through code. I helped them to scale.

And now? I'm back to Python. I can scale Python.

I've dragged a whole bunch of bad habits with me from SPSS. I state many data structures really explicitly. I find it tough to adhere to DRY (Don't Repeat Yourself) principles. Data visualization is just as ugly in the standard Python libraries as it is in SPSS. I opt for loops over recursion. And I fret way too much about premature optimization. But, it's really helping me to scale.

Analysts should know code because code scales.

Growing is uncomfortable. But the benefits of scaling are worth it.



[1] The plural of anecdote isn't evidence. I'm relating the experience and the benefits.

***

I'm Christopher Berry.
Related to scaling: Do you really want real time analytics streams?

Monday, February 11, 2013

All code is debt and all data is inertia

All code is debt and all data is inertia.

When you buy analytics software, you're buying what it can do for you. You're not buying the code.

The code delivers value. You buy the value.

The code must be maintained. That code is like a promissory note.

Data is inertia. The more of it there is, the harder it is to migrate. The more it is understood and the more habits form around it, the harder it is to adjust. Stored data creates technological lock-in and institutional lock-in.

If it's measurable and known, it's manageable.

Why
"As an evolving program is continually changed, its complexity, reflecting deteriorating structure, increases unless work is done to maintain or reduce it." - Meir Manny Lehman.

Most efforts to produce more value will generate more complexity. The cost of that additional complexity can be managed. The discipline of software engineering is dedicated to reducing the cost of that code. And it makes a huge difference.

Paying down technical debt is an investment in a better future. And there's a small industry of operations researchers and software engineers that are investigating just how large that return is. There's not enough of them.

Compared to creating brand new data structures, migrating existing ones is a pretty heavy exercise. A lot of things go wrong in data migration. And it's particularly tricky. The benefits of data unification are frequently too high to ignore. Data likes to stay where it is.

New technologies have come along that are designed to reduce this inertia and make them more slippery. And, data most certainly can be measured and managed.

The great thing about inertia is that once something is in motion, it stays in motion.
It's very hard to stop the momentum generated by data-driven operations or deeply embedded data. Once a firm becomes heavily data driven, it's very hard to stop it.

Inertia can be managed. Sometimes it isn't evident for a very long time that the force you've applied is having an effect.

It's manageable.

***

I'm Christopher Berry
I build recommendation engines at Authintic.

Thursday, January 24, 2013

Facebook announces conversion measurement

Facebook has just announced conversion measurement. That's great.

They're also heralding that CPCs are 40% lower than on competing channels. That's great too.

A cleaner linkage between click and conversion, within the Facebook Analytics Platform, is a win.

It's a step towards answering direct attribution ROI. And that's something that the digital analytics industry understands very well. We're well aware of the benefits. It's a good indicator of what's working and what isn't. It isn't perfect. But it's great to have.
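For concreteness, here's what direct attribution ROI boils down to. The figures below are invented for illustration; they're not Facebook's, and not anyone's real campaign.

# Hypothetical campaign numbers, purely for illustration.
clicks = 10000
cost_per_click = 0.45        # dollars
conversion_rate = 0.02       # share of clicks that convert
average_order_value = 60.00  # dollars

spend = clicks * cost_per_click                            # 4,500
revenue = clicks * conversion_rate * average_order_value   # 12,000
roi = (revenue - spend) / spend                            # 1.67, i.e. 167%

print("spend = %.2f, revenue = %.2f, ROI = %.0f%%" % (spend, revenue, roi * 100))

Lower CPCs move the first number; a cleaner click-to-conversion linkage is what lets you trust the second.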

Facebook, at least in 2008, talked to marketers in a very different language. They talked a lot about the graph and the impact of genuine connections. They moved off of that in the past year. They're using language that more digital marketers are used to. Direct attribution ROI is something that's well known. And it's something that I think a lot more people can succeed in optimizing and explaining.

There's so much more value in Facebook than direct attribution ROI, and a massive amount of untapped potential. That value will still be there long after the first flurry of direct attribution studies come out. Some will be positive. Some will be negative.

Facebook, by necessity, is selling what the market wants. And that's a pragmatic, great thing.

***

Hi! I'm Christopher Berry.
I'm much briefer on Twitter.

Friday, January 18, 2013

The role of information asymmetry in the consumer purchase journey

I'll sum up #NRF13, the NRF Annual Big Show, with the idea of information asymmetry.

Consumer behavior is smarter today than it was before.

Forget the debate about whether or not consumers are actually smarter themselves: the consequence of the technology they're using is that their behavior is smarter.

That's the consequence.

Consumers have been slowly eroding the comparative information advantage enjoyed by brands.  They've been using the Internet to research major purchases for a long time, using channels that are beyond the direct control of brands and retailers themselves.

It's also about where they're getting the information. No longer content to research at home, maybe print out a few pages, and bring it into the store to compare and buy, consumers are accessing the Internet on their smartphones and tablets, while in the store.

And then buying items for less online.

While in the store.

And data scientists are doing more with the data, building better applications that help consumers think less and find better products. We have a long way to go when it comes to making product reviews more reliable and intelligent. We have hundreds of datasets left to unify.

Some retailers hate this. It seemingly forces them to compete on price alone, instead of competing along other dimensions. Or, more specifically, it may expose latent weakness along a traditional, eroding, competitive dimension.

Other retailers are concerned, but okay with this. Information asymmetry is a big issue if a given merchandise line is particularly susceptible to price competition. Information asymmetry will likely disproportionately reward brands that produce a superior product or service. That knowledge spreads faster and further, spurred on by technologies that let more people than ever share more information with each other, appropriate or not.

I'm interested in what you think. Drop me a line on twitter, @cjpberry , or in the comments below.

***

I'm Christopher Berry.
I build recommendation engines to address information asymmetry.

Monday, January 14, 2013

NRF Annual Conference 2013: mapping the consumer purchasing journey

Retailers are looking forward to the NRF Annual Conference, 2013.

Many of them are thinking about this question:
  • Is there a better way to conceptualize, record, and intercept touch-points (and map so-called moments of truth) in the consumer purchasing journey?
For a very long time, many (most) of us in digital analytics treated 'bricks and clicks' as an actual divide between digital experiences and retail experiences. Digital was confined to the desktop. And then the in-store kiosk (maybe). And we analyzed ecommerce flows in ways that varied a bit from practice to practice[1].

Most consumer experience analysts argued that there was only one consumer moving through multiple channels, and that the divide among channels was arbitrary. But it wasn't a problem that retailers, on the ground, could actually see.

Until, that is, smartphone penetration made it visible.

And now that consumers are carrying computers that fit into their pockets, right into the store, and are completing purchases...right in the store...it's a visible problem.

The reasons for caring about this question are rooted in the problem of showrooming.

There are a few promising presentations on the agenda:
  • Some are suggesting that we leverage BIG DATA ANALYTICS to drive insights.
  • Some argue that it is all about OmniChannel architecture. (A consumer-centric approach to multiple channels.)
  • Some are arguing that there's a long standing problem with search and classification in the first place. 
  • Some are about how new payment technologies help. 
  • Some are all about mobile.
A few brave souls, without actually using the word, are alluding to information asymmetry between consumers and retailers. There's little doubt that it's finally having an effect. There's even evidence that it has big effects on loyalty and retention.

If you're there, I'll see you there. If you're not, I'll write up what I find.

***

I'm Christopher Berry.
Find out how I build recommendation engines at Authintic.


[1] The same 25 or so metrics are used across many ecommerce operations. The degree to which some are 'key' performance indicators is variable.

Monday, January 7, 2013

The Big Data Project

It's the best time to get involved with the Digital Analytics Association's research committee. There are some great projects that are going into the planning phase in January. Among them is the Big Data Project.

The aim of the Big Data project is to get a handle on what's out there and what bright people are doing with it. That's all that's been defined. We're entering into the planning phase.

What's out there? How much of it is steak? How much of it is sizzle? What are analysts using? What are they doing with it? What sort of hurdles have they overcome? How have their expectations diverged from their preferences?

In spite of the vague list above, the research questions haven't been nailed down. The interview guide hasn't been defined. The interview targets haven't been identified. The format of the output hasn't been defined. The working group hasn't been fleshed out. We're really early on this.

What's in it for you? What can you expect to get out of it?

The experience.

You get to meet and work with people that you wouldn't ordinarily get to work with. You get to ask questions of people that you don't normally ask questions of. You get to objectively answer big questions. That's what motivates me. And the research committee tends to attract some of the most curious people.

It's curious people exercising curiosity.

If you're a member of the Digital Analytics Association, now is a great time to get involved. If you're not a member, this is a good reason to become involved.

Email waa.research@webanalyticsassociation.com to get an invite to the next meeting, which is traditionally held on the third Wednesday of every month, always at 3pm EST.

***

I'm Christopher Berry.
I tweet too.


Friday, December 28, 2012

Ask Bigger Questions in 2013

Ask bigger questions in 2013.

Because we can. And, more importantly, we'd all be better off for asking.

The trends that predate most of our careers continue to accelerate:
  • Data has always been getting bigger (since the first phonograph?)
  • Data has always been getting cheaper (since cave drawings or writing on animal skins?)
  • Media has always been getting more fragmented (since language was invented?)

These trends even predate the oldest readers of this space. Nothing new. Get off my lawn.

What is new is just how wide the gap is between what we know and what we could seemingly know. I'm astonished by just how much I don't yet know, even though so much is now available.

Importantly, we've crossed a threshold where some leadership, in some quarters, have gone from thinking that competing on analytics is something nice to do, as insurance or a hedge, to really worrying about how their competitors are eating their lunch with analytics.

So the budget is there.

Here are a few nominations:
  • Do behavioral methods with attitudinal intercepts generate more reliable insights than traditional focus group and long form survey methods?
  • Is there a better way to conceptualize, record, and intercept touch-points (and map so-called moments of truth) in the consumer purchasing journey?
  • What experiences make a difference in the consumer journey and which are waste?
Whatever you're curious about, make it big, and make it matter.


***

I'm Christopher Berry
More at authintic.com/blog/

Thursday, December 27, 2012

Why the term growth hacker needs less sizzle

The valley is quite smitten with the word Growth Hacker. Example. Example. Rebuttal.


Growth hacking is the term used by some to describe the process of manipulating a set of product features to cause optimal returns in user growth, under budget and time constraints.

Great growth hackers blend scientific management with scientific marketing to generate better marketing outcomes. And, since the primary metric of success in the valley is user base size, that's what they seek to maximize.
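A minimal sketch of that loop, with hypothetical variant names and made-up numbers: ship two versions of a feature, compare them on the one metric the valley cares about, keep the winner, and repeat under whatever budget and time remain.

# Hypothetical experiment results, purely illustrative.
variants = {
    "control":        {"visitors": 5000, "signups": 400},
    "new_onboarding": {"visitors": 5000, "signups": 470},
}

for name, v in variants.items():
    print("%s: %.1f%% signup rate" % (name, 100.0 * v["signups"] / v["visitors"]))

# Keep the variant with the higher signup rate, then test the next feature against it.
winner = max(variants, key=lambda k: float(variants[k]["signups"]) / variants[k]["visitors"])
print("keep: " + winner)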

Most web analysts would recognize the word 'growth hacking' to mean 'optimization', and leave it at that. And yet, most web analysts, during most of the period spanning 1993-2010, didn't actively manage the website, let alone any SaaS product. Web analysts, by and large, were not empowered (or did not empower themselves) to generate better outcomes themselves.

Growth hackers do. They have the budget and authority to do so. And, to do so scientifically. That's powerful.

It's sort of like strategic product management, an inevitable hybrid role mandated by anybody really serious about adhering to the Lean Startup movement.

It's a terrible word, but it is worth understanding the underlying empowerment intended. The label growth hacker really undermines the value of the role.

Less sizzle.

***

I'm Christopher Berry
More at christopherberry.ca

Wednesday, December 26, 2012

Managing multi-disciplinary analytics teams

So you got a person for display, one for search, a person for site search, one for interstitial, two and a half for omniture (you supposedly have a half time equivalent in IT, but not really), a person for conversion, a person for paid search, a person to liaise with CMS tracking, a web analyst for microsites (read: Google Analytics deploys), and a guy who tracks apps. Oh, and a few contractors. And, you have a quarter of a headcount from the creative agency.

The common situation you've inherited is:
  • None of them are talking to each other
  • Your department sends out an average of 44 auto-generated and semi-auto-generated spreadsheets a week
  • You report to three SVPs with dotted lines to another twelve EVPs
Three tips:

1. Get everybody talking and experiencing each others' challenges

Analysts have a tendency to define themselves by the tools they use. You really have 11, maybe 12, digital analysts with particular focuses. But those focuses should not override all the experiences they have. It's up to you to get all 11 working as a team, together, as opposed to 11 individuals fighting for their particular channel's point of view. Talking in a safe culture causes trust, and trust causes excellent exchanges and collaborations. You're going to need a lot of that.

Management Dim Sum:
  • Special email distribution list just for the 11 of you to share links and information. (Set up another folder if you don't like the volume)
  • Weekly hell-or-high-water meetings that happen at the same time and place regardless; go around the horn so that everyone knows what the others are working on
  • Install an open source version of Reddit on your internal intranet and share links
  • Off sites (Toronto Data Mining Forum, Toronto Data Science Group, Web Analytics Wednesdays...)
  • Strategic output design and execution

2. Divide the tactical outputs from the strategic outputs

For each artifact that is leaving your department, identify the audience, cadence, content, and priority. If possible, set up tracking on those artifacts to assess actual use.
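A minimal sketch of what that inventory might look like, with hypothetical artifact names and a made-up "keep it" threshold; the open counts would come from whatever tracking you can attach (tracked links, an email pixel, a shared-drive access log).

# Hypothetical artifact register, purely illustrative.
artifacts = [
    {"name": "weekly_search_report", "audience": "SVP, marketing",  "cadence": "weekly",  "sent": 52,  "opened": 6},
    {"name": "daily_site_dashboard", "audience": "ecommerce team",  "cadence": "daily",   "sent": 250, "opened": 240},
    {"name": "microsite_summary",    "audience": "creative agency", "cadence": "monthly", "sent": 12,  "opened": 1},
]

for a in artifacts:
    open_rate = float(a["opened"]) / a["sent"]
    verdict = "keep" if open_rate >= 0.5 else "retire or automate"
    print("%s: %.0f%% opened -> %s" % (a["name"], 100 * open_rate, verdict))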

Define a few artifacts that are strategic (you only have the resources to do a few of them), in that they would generate wins for the organization and satisfaction with your team's work.

Strategic outputs generally require groups of analysts to work with each other. See the first point if you have problems with people talking to each other.

Management Dim Sum:
  • 80/20 rule applies to the outputs; 80% of what is produced isn't opened. 20% is. Retire or automate accordingly
  • Get clarity on what really matters - guaranteed nobody ever regretted creating more automated reports
  • Automate the mundane and boring (there's no reason why people should be manually doing what machines can do so much better.)
  • Focus on understanding the relationships amongst variables; avoid the trap of engaging in metric bullshit bingo

3. Have a vision

You should have a vision for what you want the team to be, and where you all want to end up at different time horizons. Execute towards that vision.

The return on vision compounds just like interest.

Management Dim Sum:

  • Write it out as you would with a business plan
  • Specify the logic as to why you believe certain actions will precipitate specific outcomes
  • Define which political factors are out of your control
  • Codify the logic into policies or a guidebook and review it with the team
  • Indoctrinate all new hires




That's it.

Got any dim sum you'd like to share?

***

I'm Christopher Berry
This is how I'm relevant to the future.