Advocacy tools are only one side of the government accountability equation

Increased government accountability and citizen engagement won’t come from more advocacy tools — we need contractor reform and better, more open CRM tools for elected officials


More public engagement is leading to an information overload for public officials.

Technology on Capitol Hill, and in most elected officials’ offices across the country, is terrible – it consists of outdated hardware, software that was often implemented merely because some paper-pushing company won a contract bid, and web standards that would make Internet Explorer enthusiasts blush.

There is even a non-profit that works with members of Congress, the Congressional Management Foundation (CMF), which focuses nearly full time on finding ways to make it easier for members of Congress to organize their staff internally and communicate with the public. CMF has released a number of studies on the dramatic increase in correspondence directed at members of Congress by the public, but it seems like it always comes up with inside-the-Beltway solutions – putting Band-Aids on hemorrhaging wounds instead of calling for outside support to rethink the problem.

In 2011, CMF released a study on the dramatic increase in the time it takes members of Congress to respond to constituents: a huge number of offices indicated that it regularly takes them more than nine weeks to respond to a constituent, and many offices said outright that they don’t have the resources to respond at all.

This trend toward slow responses from elected officials all across the country will only get worse as more products and advocacy apps come on the market to make it easier to “digitally scream” at elected officials. The folks working on these applications are doing God’s work, and it’s important work, but it ignores the hard reality of holding elected office: it’s becoming harder and harder to respond to constituents and the public, and the technologies that support elected officials are being developed by a small group of DC-centric companies that are really great at navigating contracting processes but pretty miserable at making software that will stand the test of time. We need more people working on apps and data standards that help elected officials adapt to the ongoing deluge of communication across multiple platforms, and that provide an infrastructure that makes it possible for 3rd party developers to be part of the solution instead of part of the problem.

Where is the breakdown in the feedback loop?

Engaging with elected officials is a classic chicken-and-egg scenario. Are people not being heard because they don’t know how to contact their elected officials, or are elected officials not hearing the public and their constituents because they don’t have a good way to manage the flood of incoming data and respond to it appropriately?

A program called “Decision Makers” was recently announced to help elected officials and petition targets talk back to petition signers. Another recent effort, AskThem, is a clone of the We the People petition platform built by the White House, complemented by the amazing elected-official contact form database from DemocracyMaps. The goal of both Decision Makers and AskThem is to provide a better feedback loop for people looking to engage elected officials and other people in positions of power. Unfortunately both platforms, like many of the new advocacy tools, are focusing most of their work on the user side (the public) and not on the service side (the elected officials). It appears AskThem will have some tools to let elected officials correspond with petition signers, but that looks like more of an “add-on” than the focus of the tool – and it’s definitely not a replacement for internal CRM tools.

There are also dozens if not hundreds of applications on the internet that make it “easier” to speak your mind to elected officials, beyond the very public forums of Facebook, Twitter, and other social networks. The good folks at the Sunlight Foundation built many of these “advocacy” apps, or provided the database infrastructure behind them. Sunlight even just took over a popular platform from the Participatory Politics Foundation, which means there is another strong tool for understanding what’s going on in Congress and reaching out to elected officials.

In short, it’s becoming easier and easier to reach out to elected officials, but that isn’t improving the feedback loop or making it easier for elected officials to manage the growing deluge. In five years, it will probably be even easier to send messages to elected officials all across the country – we’ll probably be able to do it from our Google Glasses or watch-phones. The user side is being overwhelmed with solutions, and every time another advocacy app is released, it weighs down the side of the formula that is already heavily over-weighted. And once again, elected officials are left to sort through the mess of communication and figure out the best ways, and the best tools, to respond to everyone.

It’s hard to paint elected officials as victims in this cycle of public engagement, but do we really expect elected officials to be able to run a sophisticated constituent response system through Hootsuite, or TweetDeck, or some other tool that is not easily integrated into their main CRM platform? Do we really expect our government to be accountable if everyone is digitally screaming but no one can easily listen, organize or respond to those concerns?

The government contracting process for technology is broken all across the country.

The most public example of a broken government contracting process for technology is well known by now, but it’s just the tip of the iceberg.

  • Check out the website for The Official U.S. Time — someone was paid money to build that, and even though they were probably paid that money in 1996, it’s still a real reminder that the government isn’t spending much time on technology decisions.
  • Or are you interested in checking out the U.S. Mint website, which looks like it could inject some sort of malware onto your computer at any point during your visit? This site is proof that if you stick with something long enough (animated GIFs), it will come back in vogue.

Both of these federal websites are examples of agencies simply not caring, or not spending much money on their websites, but they are also part of a larger problem: government agencies just aren’t empowered to make good technology contracting decisions.

Clay Johnson and Harper Reed wrote a great piece on federal technology contracting last week that hit the nail on the head:

“Much of the problem has to do with the way the government buys things. The government has to follow a code called the Federal Acquisition Regulation, which is more than 1,800 pages of legalese that all but ensure that the companies that win government contracts, like the ones put out to build, are those that can navigate the regulations best, but not necessarily do the best job.”

They went on to highlight why there needs to be more technology contracting reforms:

“Government should be as participatory and as interactive with its citizens as our political process is. A digital candidate will never be able to become a digital president if he can’t bring the innovation that helped him win election into the Oval Office to help him govern.”

Clay and Harper wrote about problems in our federal technology contracting system, but their concerns could easily be replicated all the way down to the local level. Just think about how bad technology contracting is at the federal level, then imagine how bad it can be at the city level.

One local blog accidentally made a strong argument for local technology contracting reform with its 2012 list of the best and worst city websites in Broward County, Florida – even some of the “best websites” are scary.

But pick a random mid-sized city in your state and check out its local government website – it’s most likely a mish-mash of technologies, almost certainly doesn’t look right on a mobile phone, and is probably a shit-show on the backend, which makes it really hard for internal staff to manage constituent responses.

Now, step back and realize that there are nearly 300 cities across the country with over 100k residents, nearly 20k municipal governments, and over 30k incorporated cities – and that trying to ensure all of these elected officials and taxpayer-funded staff have access to a system that makes it feasible to respond to constituents is extremely daunting.

But even though there is a huge difference in needs, budgets and resources, one thing is essentially constant across all these government offices: their budgets are paid for by constituents and taxpayers. All of those taxpayers want accountability. For that to happen, the technology and advocacy communities need to come together to make it easier for elected officials to manage websites and constituent responses effectively, and to standardize the process – so that in 10 years this is a problem we’re on the way to managing, not a growing problem that is becoming harder and harder to reverse. Unfortunately, in order to do that, we probably need some sort of contracting reform at all levels of government.

Constituent verification is extremely difficult with social networks — feedback loops continue to get more complicated

Typically, elected officials will only respond to their own constituents. That’s partly because they don’t give a shit about someone unless that person can vote them out of office (they would probably argue they also care about anyone who can write a huge check to their re-election campaign). But more practically, elected officials limit communication to constituents because they and their staff have so many conversations to manage that the easiest way to ensure they can actually follow up with constituents is to draw an arbitrary line and decline to follow up with people outside their district.

When people communicate with elected officials via social media and 3rd party advocacy apps, most elected officials only respond if they really have to. They also probably never hear about a lot of the complaints and criticism they receive on these platforms, unless they have a great team of staffers monitoring every page on the internet (sounds like a job for Carlos Danger!).

Also, based on the surveys done by the Congressional Management Foundation, most offices in Congress take weeks to reply to constituents – some local offices are probably faster, but it’s still a slow process. Most of those responses go to people who directly emailed, called or wrote a letter to the office. But what happens when someone signs a petition that is simply hand-delivered? Or when someone joins a Twitter campaign or posts a question on an elected official’s Facebook page? The vast majority of these messages are ignored because there isn’t an easy way to sync them into one centralized system and flag them for follow-up. Furthermore, elected officials can’t easily verify whether a commenter on Facebook or Twitter is actually their constituent, so those types of communication are essentially ignored, which leaves people feeling like elected officials just aren’t listening. Elected officials aren’t ignoring them per se; they are simply overwhelmed by the enormous number of places where someone could be digitally shouting for feedback.

So instead of online activists trying to figure out more ways to throw shit on the proverbial communication wall, they should be trying to figure out better ways to deliver feedback to members of Congress, and other elected officials, in a standardized format, and within a platform that provides the flexibility to adapt to 21st century technologies.

Constituent Relations is only as good as the CRM managing it — new constituent management platforms need to be developed and empowered from city councils all the way to Congress

Most members of Congress manage their constituent relations with a similar CRM tool – basically, it allows them to classify messages under certain categories, flag constituents for follow-up, manage staff priorities and better understand the direct feedback their office is receiving. But the problem is that this tool is totally private – none of the data flows back out publicly, and the only time the public learns that X people called an elected official’s office to complain about Y issue is when the office decides to release that information.

Furthermore, the tools that most offices use to manage constituent relations were not built to be open, and they weren’t built with 3rd party developers in mind. Imagine how much easier it would be if there were a data standard for contacting elected officials, and any 3rd party app conducting online advocacy could implement that standard to ensure feedback gets to elected officials in a secure, standardized way – so that its users would actually get some sort of response from an elected official.
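No such data standard exists today, so purely as a thought experiment, here is a minimal sketch (with entirely made-up field names) of what a shared constituent-message payload between an advocacy app and an official CRM might carry:

```python
import json

# Hypothetical example: a minimal JSON payload for a "constituent message"
# data standard. None of these field names come from a real spec -- they
# illustrate what a shared format between advocacy apps and official CRMs
# could look like.
message = {
    "schema_version": "0.1",
    "recipient": {"office_id": "us-house-ny-12"},  # stable office identifier
    "sender": {
        "name": "Jane Doe",
        "address": "123 Main St, New York, NY 10001",  # basis for district verification
        "verified_constituent": None,  # filled in by the receiving CRM, not the app
    },
    "topic": "transportation",
    "body": "Please support more funding for subway repairs.",
    "source_app": "example-advocacy-app",
    "wants_response": True,
}

payload = json.dumps(message, indent=2)
print(payload)
```

The key design point is the `verified_constituent` field: the receiving office's CRM, not the third-party app, would be responsible for matching the address to the district, which is exactly the verification problem described above.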

In 5 or 10 years, it should ideally be easy for developers to create applications that not only let people speak with elected officials at all levels of government, but do so in a way that ensures the elected officials are actually engaged in a productive discussion, not just digitally spammed.

Furthermore, whatever data standard or CRM platform eventually spreads to more elected officials should also make it easier to provide transparency – elected officials should be able to turn on public portals so people can see how many constituents asked about X issue on any given day, or how many messages the office has been sending out. These reports don’t have to get too detailed, but the public should have a better sense of what’s going on with digital communications inside offices – that type of transparency would go a long way toward increasing the trust people have in their elected officials, and also help people better understand how staff members spend the majority of their time.
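As a hypothetical illustration of what such a transparency portal could publish, here is a sketch that rolls individual messages up into per-day, per-topic counts – aggregate enough to inform the public without exposing any single constituent:

```python
from collections import Counter
from datetime import date

# Hypothetical sketch: the kind of aggregate report a public transparency
# portal could publish without exposing any individual constituent's message.
messages = [
    {"received": date(2013, 10, 1), "topic": "healthcare"},
    {"received": date(2013, 10, 1), "topic": "healthcare"},
    {"received": date(2013, 10, 1), "topic": "transportation"},
    {"received": date(2013, 10, 2), "topic": "healthcare"},
]

# Count messages per (day, topic) -- detailed enough to show what an office
# is hearing about, coarse enough to protect individual senders.
daily_counts = Counter((m["received"].isoformat(), m["topic"]) for m in messages)

for (day, topic), n in sorted(daily_counts.items()):
    print(f"{day}  {topic}: {n}")
```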

In conclusion, this two-sided formula for government transparency is not going to be solved overnight – but it seems like more and more effort is being directed at one side of the equation (engagement with elected officials) while the other side (better CRMs and tools for elected officials) goes unaddressed. Hopefully in the future we start to see this balance out.

Newspapers innovating with APIs the way you’d expect newspapers to innovate with APIs


Metered digital paywalls, restrictive Terms of Use, and data-limited APIs present problems for newspapers trying to enter the 21st century

If you had a time machine and went back to the early 1990s, I bet you could sit in a conference room with newspaper executives and hear them talk about the Internet as though it were merely a series of tubes — something that a dump truck could quite literally get stuck in — and nothing they should concern themselves with.

Fast forward to 2013 and most newspaper executives have come around to the fact that their industry is hemorrhaging readers and burning through money faster than people can use newspapers to start campfires (I believe kindling for camping trips has become one of their big selling points). In short, the Internet killed the newspaper star.

But there has been a glimmer of hope on the horizon for some of the papers trying to offset the losses from the dramatic decrease in print distribution and advertising — the metered digital paywall — which has successfully increased digital profits by requiring a subscription after someone views X articles per month. This type of system helps prevent huge drop-offs in digital advertising revenue by ensuring that ads are still shown to new visitors who organically found a story, while also encouraging new digital subscriptions.
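The metering rule itself is simple; here is a minimal, purely illustrative sketch of the logic (not any real paper's implementation, which would track visitors with cookies and server-side state):

```python
# Minimal sketch of metered-paywall logic: a visitor can read FREE_ARTICLES
# articles per calendar month before being asked to subscribe.
FREE_ARTICLES = 10

views = {}  # (visitor_id, "YYYY-MM") -> article view count

def can_read(visitor_id: str, month: str, subscriber: bool) -> bool:
    """Return True if this article view is allowed; count it against the meter."""
    if subscriber:
        return True
    key = (visitor_id, month)
    views[key] = views.get(key, 0) + 1
    return views[key] <= FREE_ARTICLES

# A non-subscriber tries to read 12 articles in one month:
results = [can_read("v1", "2013-10", subscriber=False) for _ in range(12)]
print(results.count(True))  # the first 10 reads are free, the last 2 hit the paywall
```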

So that’s the end of the story, right? Metered digital paywalls + digital advertising + physical delivery + print advertising = 100 more years of the newspaper golden age!

Not quite … newspapers are still trying to figure out how to actually build a 21st century product. Last year, there were four big publishers with APIs for their news content: The Guardian, The New York Times, USA Today, and NPR. Today, there are a couple more, including the Washington Post’s nascent efforts and the sophisticated Zeit Online API archive, which is unfortunately only available for non-commercial use.

And why do newspapers need to build an API? The main reason is that APIs spur innovation and experimentation, and they empower 3rd party developers to build apps on top of existing data. For a newspaper, an API means that someone could actually build something out of newspaper articles, test new designs, invent new ways to read and manage articles, or explore the big-data world of a newspaper archive. APIs offer newspapers a hope of crossing the bridge into the 21st century, but they also raise a series of problems due to current metered paywall strategies and restrictive Terms of Use.
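To make this concrete, here is a rough sketch of what querying a newspaper content API looks like. The endpoint and parameter names loosely follow the shape of The Guardian's Content API, but treat the specifics (and the canned response) as illustrative assumptions rather than exact documentation:

```python
import json
from urllib.parse import urlencode

# Illustrative base URL, modeled on the general shape of The Guardian's
# Content API; consult the real documentation before relying on it.
BASE = "https://content.guardianapis.com/search"

def build_query(q: str, api_key: str) -> str:
    """Build a search URL for the content API."""
    return BASE + "?" + urlencode({"q": q, "api-key": api_key})

url = build_query("open government", "YOUR-API-KEY")
print(url)

# Responses typically come back as JSON with a list of result items;
# this canned example mirrors that general shape.
sample_response = json.loads(
    '{"response": {"status": "ok", "results": '
    '[{"webTitle": "Example article", "webUrl": "https://example.com/a"}]}}'
)
for item in sample_response["response"]["results"]:
    print(item["webTitle"], "->", item["webUrl"])
```

Note what a 3rd party developer gets here: structured headlines and links they can build on, which is exactly the raw material the rest of this section argues should be opened up.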

A newspaper API is useless if it’s hampered by Terms of Use and paywall restrictions

Fewer than 30 people have watched this YouTube video from August 2013 featuring Erik Bursch, the Director of IT Operations and Content Systems for USA Today, and it explains everything that is wrong with the newspaper industry. Probably the most important news in the video comes around the 17-minute mark — Gannett papers are developing an API for ALL of their newspapers using Mashery, which could mean as many as 100 papers opening up data. Erik stated:

“We’ve really come full circle since 2010, you know with the Terms of Use change speaks loudly to what the perception changes inside of Gannett and USA Today. And then at that point, Gannett has looked at what the USA Today API has done for them, done for us, excuse me, and is now replicating that app to all the Gannett properties, which is in development right now. So Gannett and USA today will have that same API layer across all properties, which will be a huge win all around for us.”

In the video, Erik also goes into great detail about how USA Today has worked to be “the first and the best” and how they have iteratively developed their API based on developer and licensee feedback. He also spoke about how USA Today has three or four meetings a week debating how to provide the API data, and they came up with two really important conclusions:

1.) USA Today made a rather revolutionary change to their API Terms of Use to allow commercial use of their data, which brings them up to par with The Guardian. This means a 3rd party developer can actually build an app that generates revenue without violating the API Terms of Use. This is huge — revolutionary, especially since Gannett is using USA Today as the model for its big newspaper API expansion. That said, one big difference between USA Today’s Terms of Use and The Guardian’s is that The Guardian has a very smart clause — part of the framework for a Newspaper Article API + Advertising Network — that requires someone to “display on Your Website any advertisement that we supply to you with the OP Content. The position, form and size of any such advertisement must be retained as embedded in the OP Content.”

Unfortunately, the New York Times and NPR still have non-commercial clauses on their newspaper API Terms of Use, which dramatically hurts their ability to offer a developer-friendly API.

NPR is doing fantastic work with an API archive going back to 1995, which includes a cutting-edge transcript API for their radio shows, but unfortunately their non-commercial Terms of Use and a couple dozen other restrictions make it much less likely that people will want to develop innovative apps on top of their API. The New York Times has 14 APIs currently available, but their app gallery only lists 32 apps developed from the data — and if you actually click through the list, it’s a bunch of half-baked apps, literally dead links, and websites that are available for purchase. All in all, a pathetic excuse for an API gallery, and a resounding rejection of their API strategy.

2.) USA Today and Gannett can’t figure out how to build an open API within their existing metered paywall structure — This is the saddest news to date within the newspaper API debate — due to the state of the industry and their new reliance on metered paywalls, it’s nearly impossible to find a model where they can actually fully open up the Article API data. Developers can get headlines, excerpts and link backs to the original article, but they can’t get the full text, which dramatically limits how someone could use the data. It’s like opening up the largest and nicest golf driving range in the world and then telling everyone they can only use putters.

So what does this all mean? Essentially, Gannett is currently developing the largest newspaper API the world has ever seen. They are breaking down barriers and have relented on letting developers use their data commercially. But they also appear to have no solution for how to ACTUALLY open up their data within their existing metered paywall structure. And it appears they aren’t taking the lead from The Guardian by building an advertising network tied to their Newspaper Article API, even though both companies use the fantastic API company Mashery.

A good analogy would be that developers are now like kids who come down on Christmas morning to a room full of presents, only to find that their terrible parents put bike locks around all the presents, swallowed the keys, and are forcing the kids to just stare at them. Okay, that’s a bad analogy, but you get the picture — an API needs to be completely open and available for commercial use in order to spur innovation; otherwise it’s just newspapers using APIs exactly the way you’d expect newspapers to use APIs.

So what are newspapers to do? What are some ways they could innovate? Good question, and it’s something I wish Gannett and other papers would start discussing more openly. There should be hundreds of people engaging in this dialogue about what could be valuable within a newspaper archive and article API, along with the other data papers hold, and how the entire ecosystem could be opened up while still generating profits for the paper.

Newspaper API ideas should come from more than one person, but here are a few

1.) Combine the Article API with an ad network in order to serve up ads on 3rd party websites and in 3rd party apps — The Guardian is already planning/doing this and it seems like the logical way forward. In fact, newspapers building out their API’s would be smart to model a lot of their work off of The Guardian’s API, especially the breakdown of their Content API Terms and Conditions. This model creates a distributed advertising network, provides new opportunities for niche advertising, and puts newspapers back on track to generate more of their money from digital ads.

2.) Break down API access by category, and don’t allow a single domain/app to access more than one API feed. You could facilitate sports apps, or politics apps, or local news apps — you wouldn’t be giving away the whole enchilada to one organization, but you would still be opening up the full articles/archives so developers could actually do something with them.

3.) Build out a Newspaper Archive API with full-text articles and commercial access and just limit it to stories older than X months/years. This would limit problems with licensing clients while also opening up possibilities for really innovative apps. And with this type of system, merely integrate an Ad Network into the API feed in order to monetize it.

4.) Take niche events like the 2016 Presidential Election and open up articles just around that topic. So essentially create an Election 2016 API that would provide 3rd party developers with a platform and time to think and build out a wide range of innovative apps. Someone could do that for a wide range of niche categories, which could work for a large network like Gannett.

5.) Provide a platform within a paper to compile local content through a “Community Collaboration API.” Newspapers used to be the central place a community would look for news, but with the advent of bloggers, craigslist, independent news organizations, and a host of websites, it’s becoming harder and harder to find one central place for community news. Newspapers could develop a “Community Collaboration API” and build out services/plugins for the major blogging platforms like WordPress, Drupal, Wix, etc, so that bloggers could push their content to a larger audience housed on the newspaper website. The content could be categorized and organized, and newspapers could become the “Drudge Report” of local link farming, but focused on niche local blogs and issues.

6.) It needs to be easier to create APIs, and people need to be better educated about how a data mashup can reveal trends and facilitate data-driven decision making. There is a great company working on this whose funders include the new Washington Post owner Jeff Bezos. Perhaps we’ll see something from WaPo Labs that makes API management and data mashups between papers, bloggers and archives easier, but we’ll have to wait and see.
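To make one of these ideas concrete, here is a hypothetical sketch of idea 3 above: an archive API that serves full text only for articles older than a cutoff, and just a headline and link back for anything newer, protecting the paywall. The cutoff and field names are invented for illustration:

```python
from datetime import date, timedelta

# Hypothetical age-gated archive API: full text opens up once an article
# is older than the cutoff; newer articles get a teaser and a link back.
CUTOFF_DAYS = 365  # e.g., open the archive one year after publication

def api_view(article: dict, today: date) -> dict:
    age = today - article["published"]
    if age >= timedelta(days=CUTOFF_DAYS):
        return {"headline": article["headline"], "full_text": article["body"]}
    return {"headline": article["headline"], "url": article["url"]}  # teaser only

old = {"published": date(2011, 5, 1), "headline": "Old story", "body": "...", "url": "/old"}
new = {"published": date(2013, 9, 1), "headline": "New story", "body": "...", "url": "/new"}

today = date(2013, 10, 1)
print(api_view(old, today))  # archive article: includes full_text
print(api_view(new, today))  # recent article: link back only
```

An ad network integrated into the feed, as idea 3 suggests, would then monetize the opened-up archive content.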

Finally, I’m just one guy ranting about what would be nice to see in a newspaper API and developer environment – but there are hundreds if not thousands of people more qualified to talk about this subject. News organizations like Gannett need to open up discussions while they are developing their APIs, not AFTER they have already developed them. The developer community shouldn’t be finding out about the largest newspaper API in the world through an errant comment on a YouTube video. It’s time to start talking honestly about how newspapers can prosper in the 21st century, how they can encourage innovation, and how developers and news organizations can work together to better inform, educate and entertain the public.

What poker can teach you about A/B tests and iterative website optimization


Cross posted over @ Medium

Testing is the poker tell of the digital world.

During the online poker boom of the mid-2000s, I was a professional poker player for nearly three years, living in Las Vegas and grinding out a living playing mostly Omaha Hi/Lo, online and in person. While I was playing full-time, I read dozens of poker books and magazines, and became a student of the math behind poker as well as the psychology behind people’s motivations. I also wrote in a diary almost every day, and I still have hundreds of scenarios written down in a handful of notebooks that painstakingly analyze hands and the players at the table. It was as cathartic as it was educational – a way to internalize lessons learned in an attempt to improve my game.

I “retired” from poker in mid-2007 to focus on digital strategy, but I still find myself constantly referencing things I learned from the game – and probably the most relevant comparison is the process of optimizing and refining a website through A/B and multivariate testing.

When people first start playing poker, they typically just play on gut instinct – a bunch of red cards means they could catch a flush, high pairs make their heart jump, an inside straight draw looks good because they “feel it.” Unfortunately, that’s also how many people design and manage websites – gut instinct. They think slideshows on the homepage look modern, they feel like the sidebar needs more graphics, they think the text of a call to action “just sounds good,” or they saw another website do something similar without understanding the context.

With both of these “gut instinct” approaches, some people get lucky, but luck only lasts so long – and if you don’t approach poker and website optimization from a data-driven perspective, you’re going to miss out on a lot of opportunities and make a lot of mistakes.

Another way to think about digital optimization (A/B and multivariate testing) is that it’s the poker tell of the digital world. It’s the tactic that makes you profitable, the tactic that helps you beat your competitors, and the tactic a sophisticated operator uses to iteratively improve performance.

Gut instincts typically rely on too few data points

In poker, every person is unique, and every scenario is unique, so the outcome of one hand shouldn’t robotically dictate your strategy for future hands. Sometimes someone will be drunk, or spending money wildly, or your table image will be loose, and you could end up in a scenario where someone goes all-in on a pair of 2’s, tries to run you down – and then gets lucky on the river. But if you always assumed that people with a pair of 2’s were going to beat you, you would never play optimally, and it would dramatically hurt your opportunities to make money. Essentially, you can’t use one aberrant data point as the basis for your future strategy; if you combine multiple data points, though, you get a better sense of why something occurred and whether it will be repeated. Was there something about that player with a pair of 2’s that showed why they were going to go all-in? Is there a way to predict that behavior from another player? Could you see similar behavior from someone with any low pair? These hypotheses are the basis of tests you can run to build up multiple data points and make more informed decisions.

For websites, every visitor is unique, each visitor has unique motivations for visiting your website, and every device and technology stack they use is unique, so you can’t assume that the opinion or feedback of one person applies universally. Sometimes someone will be willing to use their mobile phone to go through 10 steps to make a purchase, scrolling left and right on a page optimized for a desktop, and willing to spend extra minutes to set up and confirm an account via email. But if you used that single mobile user as a decision-making data point, you would be making a big mistake by ignoring the countless other mobile visitors who abandoned your website before taking action. You could also dig for additional data about that aberrant user – what were they purchasing that made them go through those extra hoops? Did that person also select overnight shipping? Were they buying what appeared to be a present for someone else? Could you use that information to build easier “last minute” shopping options? It’s important not to weigh one data point heavily when making a site-wide decision, but you should take that one data point and dig deeper in order to test hypotheses about user behavior. Once you start thinking about user motivations, aberrant data, and site-wide trends, you can start making informed decisions about website optimization. This process is also how you start to dig into mobile and tablet usage, which provides additional testing scenarios through segmentation and targeting.
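This is exactly what a formal A/B test does: instead of trusting one aberrant visitor, you collect many data points and ask whether the difference between two page variants is bigger than chance. Here is a minimal sketch using a standard two-proportion z-test (the traffic numbers are made up):

```python
import math

# Two-proportion z-test: is variant B's conversion rate significantly
# different from variant A's, or could the gap be random noise?
def z_test(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)               # pooled conversion rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_b - p_a) / se
    # two-sided p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: 2400 visitors per variant.
z, p_value = z_test(conv_a=120, n_a=2400, conv_b=160, n_b=2400)
print(f"z = {z:.2f}, p = {p_value:.4f}")
```

With these made-up numbers the gap (5% vs ~6.7%) is statistically significant, so you would keep variant B and form the next hypothesis; with a smaller sample, the same gap might just be a lucky river card.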

Sometimes the data that is less obvious is more valuable

In poker, the most obvious data comes from people turning over their cards at the end of a hand – but some of the most valuable information comes from seeing how often people throw away their hands, and in what circumstances. Does someone typically play a hand “under the gun” in first position? Does someone fold 9 out of 10 hands? How often does player X enter a hand if player Y is already in it? Hand-selection data can tell you the baseline of cards someone is willing to play, whether they are loose or tight, and their overall understanding of strategy. Once you start to determine someone’s “abandonment score” – which poker legend Phil Hellmuth boiled down to a simpler classification, each player’s equivalent “poker animal” – you can start making better decisions about why certain groups of people act the way they do. This helps you break down demographic information much more quickly and make more informed decisions.

For websites, a lot of people look at the conversions (signups, sales, shares) and try to determine what kinds of people are coming through their funnel or engaging – but more often than not, you can get better information and more effectively optimize your website based on who is abandoning your website or funnel: what platforms and tech stacks they are using, how they found your website, the point at which they abandoned your process, and a wide range of behavior based on where they clicked and what they were viewing. When it comes down to it, people who finish the conversion funnel or take an action are a lot like people who make it all the way to the end of a poker hand – there were likely things along the way that encouraged them to follow through, but there were also likely factors outside of your control at work. You need to learn why they made it through the funnel, because there could be important data points there that you need to understand in order to replicate and increase the likelihood of conversions in the future. But one of the most important things you can learn is why certain people are not making it through the funnel (in other words, why someone is folding a hand early) – and then how you can take that information and improve your process to increase conversions.
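To make the “who abandoned, and where” idea concrete, here’s a minimal sketch of funnel drop-off analysis. The step names and counts are hypothetical – a real analytics tool would give you this breakdown, ideally per segment:

```python
# Sketch of funnel drop-off analysis (hypothetical step names and counts).
# The biggest learning often comes from where visitors "fold early."
funnel = [
    ("landing_page", 10000),
    ("product_page", 4200),
    ("cart", 1100),
    ("checkout", 650),
    ("purchase", 480),
]

# Compare each step with the next to find the leakiest transition.
for (step, count), (next_step, next_count) in zip(funnel, funnel[1:]):
    drop = 1 - next_count / count
    print(f"{step} -> {next_step}: {drop:.0%} abandoned")
```

Here the product-page-to-cart transition loses the most visitors, so that is where you would start forming hypotheses and testing variations.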

This process of optimizing for the people who “folded early” is done in large part through an effort known as segmentation and targeting – essentially the process of taking just one portion of your website visitors, for instance visitors who came from Facebook and are using a mobile phone or tablet, and serving them a specific website variation optimized just for them, in order to test how that new version affects conversions. Drilling down into segments and targeting specific versions is one of the most important parts of iterative development, and it’s the main reason most website optimization experts believe that the process of optimizing a website is never actually finished – it’s a long-term effort that requires you to segment and target more variations to smaller and smaller subsets of your users. Think of the process like playing poker against the same players over and over again: you build more data points about how they play in unique scenarios, then apply that knowledge to other players.
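As a rough illustration of segmentation and targeting, here’s a minimal sketch of serving a targeted variation only to the Facebook-referred mobile/tablet segment. The field names and the variant name are invented, and a real optimization tool would also record each assignment for later analysis:

```python
import random

def choose_variation(visitor):
    """Serve a targeted variation only to one segment: mobile/tablet
    visitors referred by Facebook. Everyone else sees the original.
    (Hypothetical fields; a real tool would log the assignment.)"""
    in_segment = (
        visitor["referrer"] == "facebook.com"
        and visitor["device"] in ("mobile", "tablet")
    )
    if in_segment and random.random() < 0.5:  # 50/50 split inside the segment
        return "mobile_social_variant"
    return "original"

# A desktop visitor from search is outside the segment, so they
# always see the original page.
print(choose_variation({"referrer": "google.com", "device": "desktop"}))
```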

Poker and website optimization are games where seemingly small decisions lead to big profit swings

Poker is a game of margins where math reigns, but every scenario is slightly different, and there is always an opportunity to increase your profitability by using psychology and poker tells. In poker, there is a concept called pot odds: you calculate your chance of making a hand that you think will win the pot, given the number of cards remaining to be shown, and compare that with the amount of money currently in the pot and the amount you’ll have to put in to see the remaining cards. Every good player calculates pot odds throughout a hand, especially when big decisions need to be made, but every great player also realizes that every scenario is different. Perhaps the math would tell you that in 99 out of 100 scenarios you should pay $50 to possibly win $500 – but let’s say that in one specific scenario, your opponent is doing what we call “poker clack” – smacking their lips or doing something out of the ordinary (like eating Oreos) – and you decide to fold your hand to the “made hand” in order to save a little money. That type of “I know your tell” scenario is the ideal situation for a website optimization expert: if you can optimize and refine your process to the point where a minor tweak squeezes a little more profitability, or a little savings, out of each transaction, then at the end of the year you’re going to see huge gains.
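The pot odds comparison can be written out as a quick calculation, using the $50-to-win-$500 numbers from the example above:

```python
def required_equity(pot, call_cost):
    """Minimum chance of winning needed for a call to break even:
    you risk `call_cost` to win `pot + call_cost`."""
    return call_cost / (pot + call_cost)

# The $50-to-win-$500 spot from the text: the call only needs to
# win about 9% of the time to be profitable in the long run.
print(f"{required_equity(500, 50):.1%}")  # -> 9.1%
```

Any draw that wins more often than that threshold is a mathematically correct call – which is exactly why it takes a read like “poker clack” to justify folding anyway.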

When optimizing a website, many organizations end up seeing huge gains just from minor changes to language, imagery, processes, or value propositions. Sometimes after an A/B test, an organization may realize that it needs to completely restructure a web page by dramatically simplifying it, or make big changes to a sales or signup funnel – but much of the time, you’ll be testing small changes in order to squeeze out a small increase in the percentage of converted visitors. Let’s say you have 5,000 people purchasing a $25 product on your website over a month, out of a total of 250,000 visitors. If you increased your conversion rate from its current 2% to 2.4%, you would increase your monthly sales from $125,000 to $150,000. This kind of dramatic increase in profits wouldn’t be possible if you didn’t think about how that small $25 sale could be scaled through an optimized conversion of your entire user base – much the same way a poker player may not realize that paying an extra $50 to see someone turn over cards you already knew they had ends up costing huge amounts of money over the long run.
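The arithmetic in that example is worth seeing on its own – a small bump in conversion rate scales across the entire visitor base:

```python
# Numbers from the example above: 250,000 monthly visitors, $25 product.
visitors = 250_000
price = 25.00

def monthly_revenue(conversion_rate):
    sales = round(visitors * conversion_rate)  # number of purchases
    return sales * price

# Nudging conversion from 2.0% to 2.4% adds $25,000 per month.
print(monthly_revenue(0.020))  # 125000.0
print(monthly_revenue(0.024))  # 150000.0
```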

Both poker and website optimization require lots of tests – and testing should be fun!

When you’re sitting at a poker table or across the table at a boardroom discussing a website, the process shouldn’t be like pulling teeth. If you want to test something, like bluffing on the river when a blank hits, or if you want to test changing the color of a button, do it! You should approach both poker and website optimization with the mentality of “you should test that!” But the important next step is to learn from your mistakes, and to analyze the data that comes from your tests.

Another important lesson to learn from both poker and website optimization is that everyone is fallible – you may think that a certain bet at the poker table is smart, and end up losing your entire stack. Or you may think that changing the header on a signup page will increase conversions, and end up cutting conversions in half. But the important thing to remember is that you shouldn’t feel bad about conducting a test as long as you learn from it!

I would be remiss not to note that there is one more thing to consider both at the poker table and with websites – don’t put all your eggs in one basket. You can bluff that river when a blank hits, but you may want to think twice before going all in on the bluff, and instead bet a healthy chunk of your stack. Similarly, in website optimization, you should rarely send 100% of your traffic to new variations – you might direct only 15-30% of your traffic to the tested variations and keep the remainder going to the original version. This ensures that a hugely unsuccessful test won’t dramatically hurt the conversion performance of your website.
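A minimal sketch of that kind of partial traffic allocation – the 20% exposure figure here is just one point inside the 15-30% range mentioned above:

```python
import random

def assign_bucket(exposure=0.2):
    """Send only a slice of traffic to the test variant, hedging the
    downside of a bad test -- betting a chunk instead of going all in."""
    return "variant" if random.random() < exposure else "original"

random.seed(42)  # seeded here only to make the demo reproducible
buckets = [assign_bucket() for _ in range(10_000)]
share = buckets.count("variant") / len(buckets)
print(f"{share:.1%} of traffic saw the variant")  # ~20%
```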

Finally, if you’ve never played poker or considered website optimization, there is no time like the present to dig in and just give it the old college try. Neither is as confusing or scary as your competitors would like you to think, and you might end up walking away with a much thicker billfold because of it.

Want to learn more about poker or website optimization?

Read more about testing and get some ideas for website optimization by checking out Dan Siroker and Pete Koomen’s new book A/B Testing: The Most Powerful Way to Turn Clicks Into Customers. Another great read is Chris Goward’s “You Should Test That!” or if you’re looking for a quicker synopsis on the benefits of website optimization and testing, check out Brian Christian’s April 2012 Wired article, “The A/B Test: Inside the Technology That’s Changing the Rules of Business.”

If you are just getting started in poker, you can’t go wrong with Super System by Doyle Brunson, Beat Texas Hold’Em by Tom McEvoy and Shane Smith, Hold’em Poker: for Advanced Players by David Sklansky and Mason Malmuth, High-Low Split Poker by Ray Zee, or of course the always classic Mike Caro poker tells videos.




What can the political advocacy community learn from the open government community?


View a mind map of a proposed infrastructure that would facilitate information sharing in the progressive political advocacy community in order to encourage innovation, collaboration and experimentation. 

Open data doesn’t necessarily create an open government, but an open government can’t exist without open data. Similarly, open political data doesn’t necessarily mean an open political community, but you can’t have an open political community without open political data.

Distributed organizing is quickly becoming the norm for progressive political advocacy organizations like Democracy for America, CREDO, Corporate Action Network, and many others. It’s effective, it’s scalable, and it empowers individual activists.

But the question remains: how do you take a distributed organizing model within an individual organization, and then collaborate with other organizations to build a movement? And how do you do it in a scalable, automated way to encourage collaboration, innovation, and experimentation within the progressive community?

Open data & open government advocates have learned that standardized and accessible data encourages innovation and experimentation.

One of the open government models that the progressive advocacy community should emulate comes from the Sunlight Foundation and its set of open government APIs. Sunlight has spent over seven years finding government data sources, standardizing the data, and serving it up in unique APIs that have facilitated hundreds of open government applications. Sunlight built a brain trust of open government data in the U.S. that is not only transforming the way people process information about the government, but has also created a collaborative space for open government advocates to gather and build apps.

Another open government model that should be noted by the progressive advocacy community is DemocracyMap, a data translator/API created by former Presidential Innovation Fellow Phil Ashlock. DemocracyMap takes data from the Sunlight Foundation APIs and combines it with data from web scrapers to create what he calls a “meta-API that aggregates, normalizes, and caches other data sources including geospatial boundary queries.” In short, his API has nearly 100k contact records for elected officials across the U.S., and it makes it possible for a developer to integrate just one API into a 3rd party app, instead of connecting – and maintaining – dozens of external APIs and web scrapers.

Finally, when it comes to standardizing international open government data, projects are still in their infancy. But one effort to organize and debate the standardization process has been led by the awesome Canadian open government group Open North. They launched an effort called The Popolo Project, whose goal “is to author, through community consensus, international open government data specifications relating to the legislative branch of government, so that civil society can spend less time transforming data and more time applying it to the problems they face. A related goal is to make it easier for civic developers to create government transparency, monitoring and engagement websites, by developing reusable open source components that implement the specifications. Although the data specification is designed primarily for open government use cases, many other use cases are supported.”

What these three large and audacious efforts have in common is an understanding that open, accessible data spurs innovation. There is a movement taking place in the open data community – a movement that is making huge gains for the open government community – but far too few people are paying attention to how this work could be translated into the political advocacy community.

How can progressive political advocacy organizations emulate the strategies of open government/data organizations in order to encourage innovation, collaboration, and experimentation? 

The progressive political advocacy community has a strong infrastructure led by 10-20 major groups, dozens of issue-focused groups, hundreds of local and state-based groups, and thousands of sophisticated political operatives and technical developers. Some of the groups are more effective organizers than others, some are more technical, some are better financed – but together they form an infrastructure working to activate and engage millions of individual activists looking to take collective action to effect progressive change.

From a technical organizing perspective, one of the big takeaways from the last few cycles has been that distributed organizing is an effective way to engage and grow a large group by empowering smaller groups and individual activists. These smaller groups launch niche-issue campaigns and hyper-local efforts, and they are the ears on the ground that find issues worth elevating to the state or national level.

But one big question about distributed organizing remains: with more and more organizations employing this strategy every year, how can we connect this growing distributed movement to a larger technical infrastructure that encourages innovation, collaboration, and experimentation?

View a mind map of a proposed infrastructure that would facilitate information sharing in an automated, machine-readable format.

There are a number of significant hurdles that advocacy organizations would have to overcome in order to build out an infrastructure similar to the one that exists in the open government/data community. Just a few of those hurdles include:

  • The support and funding of a respected organization or company willing to lead the project and support its growth into the future
  • Buy-in from the largest stakeholders in distributed organizing
  • Policies and practices to ensure no personal user data was compromised or shared without explicit approval from end users
  • An agreed-upon standardized data structure that would streamline metadata sharing
  • An agreed-upon authentication standard for an open voting API or any leadership system that was cross-site functional
  • An in-depth discussion about the end goals of a Progressive API – what types of metadata and resources should be served up to 3rd party app providers in order to encourage progressive innovation, experimentation, and collaboration
  • An in-depth discussion with the largest stakeholders about what types of information could be useful in a limited access Progressive Dashboard that would facilitate better information sharing and collaboration

These are just a few of the outstanding issues that would need to be discussed – not to mention whether this technical infrastructure would actually be useful to the movement at large.
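To give a flavor of what the “standardized data structure” hurdle involves, here is a purely hypothetical sketch of a shareable campaign metadata record. Every field name here is invented; the real schema would need to be hashed out by the stakeholders themselves, and – per the privacy hurdle above – would carry only aggregate counts, never personal user data:

```python
import json

# A purely illustrative, invented record format -- not a proposal for
# the actual standard, just a sense of the kind of metadata that could
# be shared between organizations in a machine-readable way.
record = {
    "campaign_id": "example-org/petition-1234",
    "organization": "Example Advocacy Org",
    "action_type": "petition",
    "issue_tags": ["campaign-finance", "transparency"],
    "targets": [{"office": "U.S. Senate", "state": "IA"}],
    "totals": {"signatures": 10452},  # aggregates only -- no personal data
}

print(json.dumps(record, indent=2))
```

Once many organizations emit records in one agreed-upon shape, a 3rd party app can consume a single feed instead of writing a custom integration per group – the same leverage the Sunlight and DemocracyMap APIs provide for government data.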

How can we take the lessons of the Green Button API and apply them to an open voting API?

Data standardization + Authentication standardization = Scalable data sharing

Green Button is a concept conceived by President Obama’s White House and now being implemented by the energy industry across the country – the goal is to provide near real-time access to energy usage in a home or business.

The Green Button concept was relatively straightforward: create a data standard for energy usage so metadata produced from energy companies could be machine readable by 3rd party apps, and create an authentication standard so that 3rd party apps could connect to all energy companies through the same process. So far, about 30 million households have access to Green Button data, mostly in California, and dozens of 3rd party app providers have launched apps to help people better understand their energy usage.

You may be wondering – how could we apply the lessons from Green Button, a concept rooted in hard data from energy usage, to something more abstract, like an online voting application?

Recently, Micah Sifry wrote a thought-provoking piece, “You Can’t A/B Test Your Response to Syria.” In his article, he gently poked holes in efforts by progressive groups to poll their members to determine what position the organizations should take on Syria. Essentially, he claimed that the small-scale voting efforts weren’t representative of the progressive movement at-large, and the polling was merely a CYA strategy.

If progressives were to take a cue from Green Button, it’s possible they could develop a cross-site voting infrastructure that would let groups vote across multiple sites and organizations – think of it as a distributed voting and leadership infrastructure – and that could also empower more innovative 3rd party apps built to conduct group decision making. The concept is relatively simple, just like Green Button, but the implementation would certainly face a number of technical and logistical hurdles. Some of these hurdles include:

  • How would organizations want to conduct cross-site voting?
  • What process would need to be implemented in order to ensure some sort of voting integrity or voting authentication standards?
  • How could someone’s vote spin off into a separate leadership group? For instance, could everyone who voted “no” on whether the U.S. should bomb Syria be added to a leadership group to discuss additional strategies?
  • How could a cross-site leadership and voting infrastructure be implemented by the largest groups in a way that would ensure they aren’t marginalizing their own power to engage and motivate their members?
  • How could/should voting data be anonymized in order to break down demographic and geographic information?

There are numerous groups already trying to solve the “group decision making” problem, and some of them may have already moved this ball forward to some extent. But one of the big problems with a top-down decision from one company or organization is that not all of the stakeholders get a chance to raise issues with the system. The Green Button data standard went through years of intense debate to ensure that the energy company stakeholders were on board with the decisions and could implement whatever data standard/authentication process was decided upon. It’s my belief that any distributed voting or leadership effort that attempts to bring the largest progressive groups on board needs to directly address their concerns throughout the entire deliberative process.

Should the Progressive Political Advocacy community attempt to build a system similar to what has been built out by the open government/data community?

The process of building out a technical infrastructure to encourage innovation, collaboration, and experimentation within the progressive political advocacy community could take years of work. Stakeholders would need to be brought to the table, high-level discussions with app developers would need to take place, and individual activists would need to provide their feedback on the proposed infrastructure. Beyond that, there is a handful of steps that would need to be debated and worked through:

  • Research and catalog unique data sources/groups to get a sense for the political advocacy landscape
  • Debate and determine standards for metadata sharing
  • Debate and determine authentication standards
  • Create an application to translate unique data source metadata into the standard
  • Serve the metadata standard through a JSON API to 3rd party app providers
  • Provide metadata analysis through a limited-access application
  • Determine the feasibility of an open voting API and implementation steps

There would likely be dozens of other smaller hurdles along the way that would need to be overcome, but it’s my belief that this type of infrastructure is the key to tying everything together.

We all use different CRM/CMS platforms. Some groups custom code their projects. Some groups use open source projects. New apps and websites come online all the time with the “cure all” for our problems. New group decision-making apps are created every year. New best-practices apps and organizations launch all the time. But it doesn’t seem like individual apps and organizations will ever be able to build out an infrastructure, grow into sustainable efforts, and provide the collaborative, innovative technology to move the progressive advocacy community forward over the next 20+ years.

Organizers oftentimes say that people are the solution to our problems. Technologists oftentimes say that digital is the solution. Old-school strategists point to local organizing and distributed decision making as the keys to growing our movement. New-school strategists point to list building and distributed organizing supported by large organizations as the tenets of new organizing. The question remains, though: is any one strategy the correct one? How can we bridge all of these concepts together in a way that solves not just the problems we’re seeing today, but the problems we’ll see 5, 10, and 20 years down the line?

Finally, some people may wonder why this process should be limited to the progressive political advocacy community – why not build out something like this for people of all political persuasions? The main reason is that this process is complicated enough without trying to bring together politically disparate groups and individuals and get them to work together. It would be like trying to teach 100 cats a synchronized swimming routine – perhaps fun to watch, and amazing if it actually worked – but utterly unrealistic given the challenges at hand.


The website definitely didn’t fix the problem


I’m a huge fan of Google Alerts – I use it on a regular basis to keep track of various topics, and I’ve found it immensely helpful for research projects. That being said, the one complaint I’ve always had is that you can’t schedule when you want to receive the alerts. The way the system is set up, if you create your alert at 9 pm, you’ll typically get your daily alerts around 9 pm. That’s a problem when you have a bunch of alerts set up and would ideally want them to arrive in the morning – to fix the issue, you have to delete and recreate the alerts, which is definitely annoying. So back in 2012, I decided to quickly spin up a website encouraging Google to invest a little time (and money) into fixing that issue. It’s been about a year and a half, and even though I heard that a few of the engineers working on the project got a good chuckle out of the website, end users are still waiting on the fix. My guess is that Google Alerts is just another product Google isn’t interested in investing additional money into, but I feel like it’s one of their better products that just needs a little TLC.

19k Digitized Iowa Campaign Finance Records


Searching for campaign finance records on the Iowa Ethics and Campaign Disclosure Board website has always been a nightmare – files are stored in PDF format and aren’t machine readable, so content is essentially siloed in an outdated system. So in February 2011, I decided to download 19k records and use Scribd to digitize the campaign finance records and make them searchable. While the data is outdated now, I felt it showed how easy it is for someone to use a little 21st century technology to improve a system built for a bygone era of campaign finance records submitted on paper.

You can view and search the 19k documents on Scribd here.

McCain vs Iowa


While not the prettiest site, I hard-coded it in 2008 to showcase positions Senator John McCain had taken in his career that were diametrically opposed to issues most Iowans care about. The concept was cloned in battleground states across the country by the DNC. Check out the site on the Wayback Machine. I also produced six videos for the website with Photoshop, Final Cut Pro, Soundtrack, and LiveType. The videos use McCain’s own words and the words of other elected officials to argue that McCain was out of touch with Iowans. This was my first real introduction to video research, and the majority of the content was found either on C-SPAN or on YouTube.

McCain vs Iowa Republicans

Broken Record

McCain vs Equal Pay

McCain vs Rural Health Care

McCain vs Iowa Innovation

McCain vs Iowa