Advocacy tools are only one side of the government accountability equation

Increased government accountability and citizen engagement won’t come from more advocacy tools — we need contracting reform and better, more open CRM tools for elected officials


More public engagement is leading to an information overload for public officials.

Technology on Capitol Hill, and in most elected officials’ offices across the country, is terrible – outdated hardware, software that was often implemented merely because some paper-pushing company won a contract bid, and web/technology standards that would make Internet Explorer enthusiasts blush.

There is even a non-profit, the Congressional Management Foundation (CMF), that works with members of Congress nearly full-time on finding ways to help offices organize their staff internally and communicate with the public. CMF has released a number of studies on the dramatic increase in correspondence directed at members of Congress by the public, but it seems like they always come up with inside-the-Beltway solutions – putting Band-Aids on hemorrhaging wounds instead of calling for outside support to rethink the problem.

In 2011, CMF released a study on the dramatic increase in the time it takes members of Congress to respond to constituents: a huge number of offices indicated it regularly takes them more than nine weeks to respond, and many said outright that they don’t have the resources to respond to constituents at all.

This trend toward slow responses from elected officials all across the country will only worsen as more products and advocacy apps come on the market to make it easier to “digitally scream” at elected officials. The folks working on these applications are doing God’s work, and it’s important, but it ignores the hard reality of holding elected office – it’s becoming harder and harder to respond to constituents and the public, and the technologies that support elected officials are being developed by a small group of DC-centric companies that are really great at navigating the contracting process but pretty miserable at actually making software that will stand the test of time. We need more people working on apps and data standards that will help elected officials adapt to the ongoing deluge of communication across multiple platforms, and on an infrastructure that makes it possible for 3rd party developers to be part of the solution instead of part of the problem.

Where is the breakdown in the feedback loop?

Engaging with elected officials is a classic chicken-or-egg scenario. Are people not being heard because they don’t know how to contact their elected officials, or are elected officials not hearing the public and their constituents because they don’t have a good way to manage the flood of incoming messages and respond to them appropriately?

Recently, Change.org announced a program called “Decision Makers” to help elected officials and petition targets talk back to petition signers. Another recently announced effort, AskThem, is a clone of the White House’s We the People petition platform, complemented by the amazing elected-official contact form database from DemocracyMaps. The goal of both Decision Makers and AskThem is to provide a better feedback loop for people looking to engage elected officials and others in positions of power – unfortunately both platforms, like many of the new advocacy tools, focus most of their work on the user side (the public) and not on the service side (the elected officials). It appears AskThem will have some tools to let elected officials correspond with petition signers, but that looks like an “add-on” rather than the focus of the tool – and it’s definitely not a replacement for internal CRM tools.

There are also dozens, if not hundreds, of applications on the internet that make it “easier” to speak your mind to elected officials, beyond the very public forums of Facebook, Twitter, and other social networks. The good folks at the Sunlight Foundation built many of these “advocacy” apps, or provided the database infrastructure behind them. Sunlight even just took over the popular OpenCongress.org from the Participatory Politics Foundation, which means there is another strong platform for understanding what’s going on in Congress and reaching out to elected officials.

In short, it’s becoming easier and easier to reach out to elected officials, but that isn’t improving the feedback loop or making it easier for elected officials to manage the growing deluge. In five years, it will probably be even easier to send messages to elected officials all across the country – and we’ll probably be able to do it from our Google Glasses or watch-phones. The user side is being overwhelmed with solutions, and every time another advocacy app is released, it adds weight to the side of the formula that is already heavily over-weighted. And once again, elected officials are left to sort through the mess of communication and figure out the best ways – and the best tools – to respond to everyone.

It’s hard to paint elected officials as victims in this cycle of public engagement, but do we really expect them to run a sophisticated constituent response system through Hootsuite, TweetDeck, or some other tool that doesn’t integrate with their main CRM platform? Do we really expect our government to be accountable if everyone is digitally screaming but no one can easily listen to, organize, or respond to those concerns?

The government contracting process for technology is broken all across the country.

The most public example of a broken government contracting process for technology has been Healthcare.gov, but that’s just the tip of the iceberg.

  • Check out the website for The Official U.S. Time: https://www.time.gov/ — someone was paid to build that, and even though they were probably paid in 1996, it’s still a real reminder that the government isn’t spending much time on technology decisions. </pun>
  • Or check out the U.S. Mint website, which looks like it could inject some sort of malware onto your computer at any point during your visit: https://www.usmint.gov/ This site is proof that if you stick with something long enough (animated GIFs), it will come back in vogue.

Both of these federal websites are examples of agencies not caring, or not spending much money on their websites, but they’re also part of a larger problem: government agencies just aren’t empowered to make good technology contracting decisions.

Clay Johnson and Harper Reed wrote a great piece on federal technology contracting last week that hit the nail on the head:

“Much of the problem has to do with the way the government buys things. The government has to follow a code called the Federal Acquisition Regulation, which is more than 1,800 pages of legalese that all but ensure that the companies that win government contracts, like the ones put out to build HealthCare.gov, are those that can navigate the regulations best, but not necessarily do the best job.”

They went on to highlight why we need more technology contracting reform:

“Government should be as participatory and as interactive with its citizens as our political process is. A digital candidate will never be able to become a digital president if he can’t bring the innovation that helped him win election into the Oval Office to help him govern.”

Clay and Harper wrote about problems in our federal technology contracting system, but their concerns could easily be replicated all the way down to the local level. Just think about how bad technology contracting is at the federal level, and then imagine how bad it can be at the city level.

The blog CoralSpringsTalk.com accidentally made a strong argument for local technology contracting reform with its 2012 list of the best and worst city websites in Broward County, Florida – even some of the “best” websites are scary.

But pick a random mid-sized city in your state and check out its local government website – it’s most likely a mish-mash of technologies, almost certainly doesn’t look right on a mobile phone, and is probably a shit-show on the backend, which makes it really hard for internal staff to manage constituent responses.

Now, step back and realize that there are nearly 300 cities across the country with over 100k residents, nearly 20k municipal governments, and over 30k incorporated cities – trying to ensure that all of these elected officials and taxpayer-funded staff have access to a system that makes it feasible to respond to constituents is extremely daunting.

But even though there are huge differences in needs, budgets, and resources, one thing is essentially constant across all these government offices – their budgets are paid for by constituents and taxpayers. All of those taxpayers want accountability, but for that to happen, the technology and advocacy communities need to come together to make it easier for elected officials to manage websites and constituent responses effectively, and to standardize the process – so that in 10 years this is a problem we’re on the way to managing, not a growing problem that is becoming harder and harder to reverse. Unfortunately, to do that we probably need some sort of contracting reform at all levels of government.

Constituent verification is extremely difficult with social networks — feedback loops continue to get more complicated

Typically, elected officials will only respond to their own constituents. That’s partly because they don’t give a shit about someone unless that person can vote them out of office (they would probably argue they also care about anyone who can write a huge check to their re-election campaign), but more practically, officials and their staff have so many conversations to manage that the easiest way to ensure they can actually follow up with constituents is to arbitrarily say they won’t follow up with anyone outside their district.

When people communicate with elected officials via social media and 3rd party advocacy apps, most elected officials only respond if they really have to. They also probably rarely know about a lot of the complaints and criticism they receive on these platforms, unless they have a great team of staffers monitoring every page on the internet (sounds like a job for Carlos Danger!).

Also, based on the surveys done by the Congressional Management Foundation, most offices in Congress take weeks to reply to constituents – some local offices are probably better, but it’s still a slow process. Most of those responses go to people who directly emailed, called, or wrote a letter to the office. But what happens when someone signs a petition that is simply hand-delivered? Or joins a Twitter campaign, or posts a question on an elected official’s Facebook page? The vast majority of these messages are ignored because there isn’t an easy way to sync them into one centralized system and flag them for follow-up. Furthermore, elected officials can’t easily verify whether a commenter on Facebook or Twitter is actually their constituent, so those types of communication are essentially ignored, which leaves people feeling like elected officials just aren’t listening. Elected officials aren’t ignoring them per se – they are simply overwhelmed by the enormous number of places where someone could be digitally shouting for feedback.
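
To make the verification gap concrete, here is a minimal sketch of the check an office can run when a message arrives with a mailing address attached – and can’t run when it arrives via a tweet. The lookup endpoint and response shape are assumptions for illustration (real options in 2013 included the Sunlight Congress API and the Google Civic Information API, whose details differ):

```python
# Hypothetical sketch: verifying that a message sender is a constituent.
# The lookup service URL and response format are placeholders, not a real API.
import requests

DISTRICT_LOOKUP_URL = "https://example-civic-api.org/districts"  # placeholder

def district_for_address(address):
    """Resolve a mailing address to a congressional district (e.g. 'IA-01')."""
    resp = requests.get(DISTRICT_LOOKUP_URL, params={"address": address})
    resp.raise_for_status()
    return resp.json().get("district")  # assumed response shape

def is_constituent(address, office_district):
    """True if the sender's address falls inside the office's district."""
    return district_for_address(address) == office_district

# A Facebook commenter or a Twitter handle has no address attached, which is
# exactly why this check fails for social media: there is nothing to look up.
```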

So instead of figuring out more ways to throw shit at the proverbial communication wall, online activists should be figuring out better ways to deliver feedback to members of Congress and other elected officials – in a standardized format, and within a platform flexible enough to adapt to 21st century technologies.

Constituent relations are only as good as the CRM managing them — new constituent management platforms need to be developed and empowered from city councils all the way to Congress

Most members of Congress manage their constituent relations with a similar CRM tool – basically, it lets them classify messages under certain categories, flag constituents for follow-up, manage staff priorities, and better understand the direct feedback their office is receiving. The problem is that this tool is totally private – none of the data flows back out, and the only time the public learns that X people called the elected official’s office to complain about Y issue is when the office decides to release that information.

Furthermore, the tools most offices use to manage constituent relations were not built to be open, and they weren’t built with 3rd party developers in mind. Imagine how much easier it would be if there were a data standard for contacting elected officials: any 3rd party app conducting online advocacy could implement the standard to ensure that feedback submitted through the app reached elected officials in a secure, standardized way – and that its users would actually get some sort of response.
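
To make that idea concrete, here is a minimal sketch of what a single message in such a standard might look like. Every field name here is hypothetical – no such standard exists yet – but a shared, versioned format like this is what would let any 3rd party app deliver feedback that an official’s CRM could ingest automatically:

```python
# Hypothetical constituent-message format -- every field name here is an
# illustration, not an existing standard.
import json
from dataclasses import dataclass, asdict

@dataclass
class ConstituentMessage:
    version: str        # data-standard version, so CRMs can evolve
    source_app: str     # which 3rd party tool delivered the message
    topic: str          # issue category, for CRM triage and reporting
    body: str           # the constituent's actual message
    address: str        # used for district/constituent verification
    reply_channel: str  # where a response should be sent

msg = ConstituentMessage(
    version="0.1",
    source_app="example-petition-app",
    topic="transportation",
    body="Please support the transit funding bill.",
    address="123 Main St, Des Moines, IA",
    reply_channel="email:jane@example.com",
)

# Any CRM that speaks the standard could ingest this payload directly.
print(json.dumps(asdict(msg), indent=2))
```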

In 5 or 10 years, it should be easy for developers to create applications that not only let people speak with elected officials at all levels of government, but do so in a way that ensures the officials are actually being engaged in a productive discussion, not just digitally spammed.

Furthermore, whatever data standard or CRM platform eventually gets pushed out to more elected officials should also make it easier to provide transparency – officials should be able to turn on public portals so people can see how many constituents asked about X issue on any given day, or how many messages the office has been sending out. These reports don’t have to get too detailed, but the public should have a better sense of what’s going on with digital communications inside offices – that type of transparency would go a long way toward increasing the trust people have in their elected officials, and it would help people better understand how staff members spend the majority of their time.
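
As a sketch of how cheap that kind of reporting becomes once messages live in a structured CRM, here is a roll-up of a message log into the daily, per-topic counts a public portal could publish (the record layout is an assumption):

```python
# Minimal sketch of a public transparency roll-up, assuming the CRM can
# export (date, topic) pairs for incoming messages.
from collections import Counter

messages = [  # stand-in for a CRM export
    {"date": "2013-11-04", "topic": "healthcare"},
    {"date": "2013-11-04", "topic": "healthcare"},
    {"date": "2013-11-04", "topic": "transportation"},
    {"date": "2013-11-05", "topic": "healthcare"},
]

daily_counts = Counter((m["date"], m["topic"]) for m in messages)

for (date, topic), count in sorted(daily_counts.items()):
    print(f"{date}  {topic:<15} {count} messages")

# Publishing aggregates like these reveals office workload without exposing
# any individual constituent's correspondence.
```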

In conclusion, this two-sided equation for government transparency is not going to be solved overnight – but it seems like more and more effort is being directed at one side (engagement with elected officials) while the other side (better CRMs and tools for elected officials) goes unworked. Hopefully in the future we start to see this balance out.

Newspapers innovating with APIs the way you’d expect newspapers to innovate with APIs


Metered digital paywalls, restrictive Terms of Use, and data-limited APIs present problems for newspapers trying to enter the 21st century

If you had a time machine and went back to the early 1990s, I bet you could sit in a conference room with newspaper executives and hear them talk about the Internet as though it were merely a series of tubes — something a dump truck could quite literally get stuck in — and nothing they should concern themselves with.

Fast forward to 2013 and most newspaper executives have come around to the fact that their industry is hemorrhaging readers and burning through money faster than people can use newspapers to start campfires (I believe kindling for camping trips has become one of their big selling points). In short, the Internet killed the newspaper star.

But there has been a glimmer of hope on the horizon for some of the papers trying to offset losses from the dramatic decrease in print distribution and advertising — the metered digital paywall — which has successfully increased digital profits by requiring a subscription after someone views X articles per month. This type of system helps prevent huge drop-offs in digital advertising revenue by ensuring that ads are still shown to new visitors who organically found a story, while also encouraging new digital subscriptions.
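
The metering logic itself is trivial – the hard part is the business model around it. Here is a toy version of the counter, assuming views are tracked per visitor per month:

```python
# Toy metered-paywall check: the free-article limit and the tracking
# mechanism (cookies, accounts, IP) vary by paper; this shows only the
# core logic.
from collections import defaultdict

FREE_ARTICLES_PER_MONTH = 10  # assumed limit; papers tune this number

view_counts = defaultdict(int)  # (visitor_id, month) -> articles read

def can_read(visitor_id, month, is_subscriber):
    """Subscribers always pass; everyone else gets a monthly allowance."""
    if is_subscriber:
        return True
    view_counts[(visitor_id, month)] += 1
    return view_counts[(visitor_id, month)] <= FREE_ARTICLES_PER_MONTH

# The 11th article in a month trips the paywall for a non-subscriber.
for i in range(11):
    allowed = can_read("visitor-42", "2013-11", is_subscriber=False)
print(allowed)  # False
```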

So that’s the end of the story, right? Metered digital paywalls + digital advertising + physical delivery + print advertising = 100 more years of the newspaper golden age!

Not quite … newspapers are still trying to figure out how to actually build a 21st century product. Last year, there were four big publishers with APIs for their news content: The Guardian, The New York Times, USA Today, and NPR. Today, there are a couple more papers with APIs, including the Washington Post’s nascent efforts and the sophisticated Zeit Online API archive, which is unfortunately only available for non-commercial use.

And why do newspapers need to build an API? The main reason is that APIs spur innovation and experimentation, and they empower 3rd party developers to build apps on top of existing data. For a newspaper, an API means someone could actually build something out of newspaper articles, test new designs, try new ways to read and manage articles, or explore the big-data world of a newspaper archive. APIs offer newspapers a hope of crossing the bridge into the 21st century, but they also run into a series of problems caused by current metered paywall strategies and restrictive Terms of Use.
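
For a sense of what “building on top of newspaper data” looks like in practice, here is a quick sketch against The Guardian’s Content API, one of the four mentioned above. The endpoint and the “test” developer key match the public documentation as I understand it, but treat the exact parameters and response fields as assumptions:

```python
# Sketch: querying The Guardian's Content API, one of the few open
# newspaper APIs. The "test" key was offered for development use; the
# response fields shown are assumptions based on the public docs.
import requests

resp = requests.get(
    "https://content.guardianapis.com/search",
    params={"q": "open government", "api-key": "test"},
)
resp.raise_for_status()

# Print section, headline, and link for each matching article.
for item in resp.json()["response"]["results"]:
    print(item["sectionName"], "|", item["webTitle"])
    print("   ", item["webUrl"])
```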

A newspaper API is useless if it’s hampered by Terms of Use and paywall restrictions

Fewer than 30 people have watched this YouTube video from August 2013 featuring Erik Bursch, the Director of IT Operations and Content Systems for USA Today, and it explains everything that is wrong with the newspaper industry. Probably the most important news in the video comes around the 17-minute mark — Gannett is developing an API for ALL of its newspapers using Mashery, which could mean as many as 100 papers opening up data. Erik stated:

“We’ve really come full circle since 2010, you know with the Terms of Use change speaks loudly to what the perception changes inside of Gannett and USA Today. And then at that point, Gannett has looked at what the USA Today API has done for them, done for us, excuse me, and is now replicating that app to all the Gannett properties, which is in development right now. So Gannett and USA today will have that same API layer across all properties, which will be a huge win all around for us.”

In the video, Erik also goes into great detail about how USA Today has worked to be “the first and the best,” and how they have iteratively developed their API based on developer and licensee feedback. He also spoke about how USA Today holds three or four meetings a week debating how to provide the API data, and they came up with two really important conclusions:

1.) USA Today made a rather revolutionary change to their API Terms of Use to allow commercial use of their data, which brings them up to par with The Guardian. This means a 3rd party developer can actually build an app that generates revenue without violating the API Terms of Use. This is huge — revolutionary, especially since Gannett is using USA Today as the model for its big newspaper API expansion. That said, one big difference between the USA Today Terms of Use and the Guardian Terms of Use is that the Guardian has a very smart clause — part of the framework for a newspaper article API plus advertising network — that requires users to “display on Your Website any advertisement that we supply to you with the OP Content. The position, form and size of any such advertisement must be retained as embedded in the OP Content.”

Unfortunately, The New York Times and NPR still have non-commercial clauses in their API Terms of Use, which dramatically hurt their ability to offer a developer-friendly API.

NPR is doing fantastic work with their API archive going back to 1995, which includes a cutting-edge transcript API for their radio shows, but unfortunately their non-commercial Terms of Use restrictions, along with a couple dozen other restrictions, make it much less likely that people will want to develop innovative apps on top of their API. The New York Times currently has 14 APIs available, but their app gallery lists only 32 apps developed from the data, and if you actually click through the list, it’s a bunch of half-baked apps, literally dead links, and websites that are up for sale — all in all a pathetic excuse for an API gallery, and a resounding rejection of their API strategy.

2.) USA Today and Gannett can’t figure out how to build an open API within their existing metered paywall structure — This is the saddest news to date in the newspaper API debate. Due to the state of the industry and its new reliance on metered paywalls, it’s nearly impossible to find a model where they can fully open up the article API data. Developers can get headlines, excerpts, and link-backs to the original article, but they can’t get the full text, which dramatically limits how someone could use the data. It’s like opening the largest and nicest golf driving range in the world and then telling everyone they can only use putters.

So what does this all mean? Essentially, Gannett is currently developing the largest newspaper API the world has ever seen. They are breaking down barriers and have relented and will let developers use their data for commercial purposes. But they also don’t appear to have a solution for how to ACTUALLY open up their data within their existing metered paywall structure. And they don’t appear to be taking a cue from The Guardian by building an advertising network tied to their article API, even though both companies use the fantastic API company Mashery.

A good analogy would be that developers are now like a kid who comes down on Christmas morning to a room full of presents, only to find out that their terrible parents put bike locks around all the presents and swallowed the keys, forcing the kids to just stare at them. Okay, that’s a bad analogy, but you get the picture — an API needs to be completely open and available for commercial use in order to spur innovation; otherwise it’s just newspapers using APIs exactly the way you’d expect newspapers to use APIs.

So what are newspapers to do? What are some ways they could innovate? Good question, and it’s something I wish Gannett and other papers would start discussing more openly. There should be hundreds of people engaging in this dialog about what could be valuable in a newspaper archive and article API, along with the other data papers hold, and about how the entire ecosystem could be opened up while still generating profits for the paper.

Newspaper API ideas should come from more than one person, but here are a few

1.) Combine the article API with an ad network in order to serve up ads on 3rd party websites and in 3rd party apps — The Guardian is already planning/doing this, and it seems like the logical way forward. In fact, newspapers building out their APIs would be smart to model a lot of their work on The Guardian’s API, especially the breakdown of their Content API Terms and Conditions. This model creates a distributed advertising network, provides new opportunities for niche advertising, and puts newspapers back on track to generate more of their money from digital ads.

2.) Break down API access by category, and don’t allow a single domain/app to access more than one API feed. You could facilitate sports apps, politics apps, or local news apps — you wouldn’t be giving away the whole enchilada to one organization, but you would still be opening up the full articles/archives so developers could actually do something with them (see the sketch after this list).

3.) Build out a newspaper archive API with full-text articles and commercial access, and just limit it to stories older than X months/years. This would limit problems with licensing clients while opening up possibilities for really innovative apps. With this type of system, simply integrate an ad network into the API feed to monetize it (also covered in the sketch below).

4.) Take niche events like the 2016 Presidential Election and open up articles just around that topic — essentially an Election 2016 API that would give 3rd party developers a platform, and the lead time, to think through and build innovative apps. The same could be done for all sorts of niche categories, which could work well for a large network like Gannett.

5.) Provide a platform within a paper to compile local content through a “Community Collaboration API.” Newspapers used to be the central place a community looked for news, but with the advent of bloggers, craigslist, independent news organizations, and a host of other websites, it’s becoming harder and harder to find one central place for community news. Newspapers could develop a “Community Collaboration API” and build out services/plugins for the major blogging platforms like WordPress, Drupal, Wix, etc., so bloggers could push their content to a larger audience housed on the newspaper website. The content could be categorized and organized, and newspapers could become the “Drudge Report” of local link farming, but focused on niche local blogs and issues.

6.) It needs to be easier to create APIs, and people need to be better educated about how a data mashup can reveal trends and facilitate data-driven decision making. A great company working on this is Mashape.com, and one of their funders is the new Washington Post owner, Jeff Bezos. Perhaps we’ll see something from WaPo Labs that makes API management and data mashups between papers, bloggers, and archives easier, but we’ll have to wait and see.
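
As promised after ideas 2 and 3 above, here is a hypothetical sketch of how category-scoped API keys and an archive age cutoff could be enforced server-side. Every name and rule here is illustrative, not part of any real newspaper API:

```python
# Hypothetical server-side checks for ideas 2 and 3: each API key is scoped
# to a single category feed, and full text is only released once a story
# ages past an archive cutoff.
from datetime import date, timedelta

ARCHIVE_CUTOFF = timedelta(days=365)  # assumed "older than X" threshold

API_KEYS = {  # each key unlocks exactly one category feed
    "key-sports-app": "sports",
    "key-politics-app": "politics",
}

def authorize(api_key, requested_category):
    """Idea 2: a key only ever sees its own category."""
    return API_KEYS.get(api_key) == requested_category

def full_text_allowed(published, today=None):
    """Idea 3: full article text opens up once a story is old enough."""
    today = today or date.today()
    return (today - published) >= ARCHIVE_CUTOFF

print(authorize("key-sports-app", "politics"))                   # False
print(full_text_allowed(date(2012, 1, 15), date(2013, 11, 1)))    # True
```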

Finally, I’m just one guy ranting about what would be nice to see in a newspaper API and developer environment – there are hundreds if not thousands of people more qualified to talk about this subject. News organizations like Gannett need to open up discussions while they are developing their APIs, not AFTER the APIs are already built. The developer community shouldn’t be finding out about the largest newspaper API in the world through an errant comment on a YouTube video. It’s time to start talking honestly about how newspapers can prosper in the 21st century, how they can encourage innovation, and how developers and news organizations can work together to better inform, educate, and entertain the public.

19k Digitized Iowa Campaign Finance Records


Searching for campaign finance records on the Iowa Ethics and Campaign Disclosure Board website has always been a nightmare — files are stored as PDFs that aren’t machine-readable, so the content is essentially siloed in an outdated system. In February 2011, I decided to download 19k records and use Scribd to digitize them and make them searchable. The data is outdated now, but I felt it showed how easily a little 21st century technology could improve a system built for a bygone era of campaign finance records submitted on paper.
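
Scribd did the heavy lifting (OCR and indexing) in my case, but for anyone who wants to re-create the pipeline themselves, a do-it-yourself version might look something like this sketch. The record URL is a placeholder, and the OCR route assumes the pdf2image and pytesseract libraries rather than anything I actually used:

```python
# DIY sketch of the digitization step: download a scanned PDF record and
# OCR it into searchable text. The URL is a placeholder; pdf2image needs
# the poppler utilities installed, and pytesseract needs Tesseract OCR.
import requests
from pdf2image import convert_from_path
import pytesseract

RECORD_URL = "https://example.org/records/report-00001.pdf"  # placeholder

pdf_path = "report-00001.pdf"
with open(pdf_path, "wb") as f:
    f.write(requests.get(RECORD_URL).content)

# Scanned filings have no text layer, so render pages to images and OCR them.
text = "\n".join(
    pytesseract.image_to_string(page) for page in convert_from_path(pdf_path)
)

with open("report-00001.txt", "w") as f:
    f.write(text)  # now searchable and indexable, unlike the original PDF
```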

You can view and search the 19k documents on Scribd here.