Cities are important urban areas where many of us live, work, study and raise families. There’s a push to make cities smarter.
Why? A smart city is a connected city – a place where Anna can get around easily because she can see her bus and train schedules in real time. A smart city is effective, integrated and innovative – a city that embraces Gina’s startup and makes it easy for Ali to find the best school for his daughter.
A smart city uses open data.
Open data is knowledge for everyone; it’s information that can be shared with anyone for any purpose without restriction. We aren’t talking sensitive or personal information, we’re talking about the data that drives decisions and is the lifeblood of a city’s civic landscape.
Open data helps cities connect people and organisations, share information, and create new tools, products and services. Using open data well means smarter cities. This series covers everything you need to know about open data for smarter cities. First, let’s get you started.
Where do I start with open data for my city?
It’s hard to know where to start with open data when you’re a city. With many moving parts, stakeholders and a huge wish list, getting started can be daunting. Here are the key things to consider:
Start with why: Have some idea of why you’ll need open data and how you’ll measure the success of your initiative. It doesn’t have to be perfect at the start but it will keep you on track.
Think about delivery: You’ll need a way to deliver open data to the people and organisations that will use it. There are several platforms and tools available. The right one will play nicely with your existing platforms. Remember, don’t reinvent the wheel!
Find your audience: Get to know who could use your open data so you can work out their needs. Check FOIA requests – that tells you what people want to know!
Work with your community: You’ll need a community engaged around data, including developers, citizens & businesses – back to those FOIA requests and materials you already publish. What are people interested in? Who are these people?
Make sure your open data all plays well together: Open data doesn’t have to be perfect. Start where you are and keep improving. Think about what’s needed to connect data together from the start: how would you connect data on parks to data on air quality?
Think value, value, value: Think about the benefit of sharing and connecting data. Local government departments will probably be the biggest users and benefit the most from connected open data, so keep them onside.
What questions can we ask? Will this data help solve our problem? Can we use this algorithm or that one?
Welcome to data wrangling 101. Exploring our data before we dive in and start playing with it or reshaping it means more productive data science or data analysis. If you’re lucky, you know enough about the domain to understand the quirks a dataset throws your way or you have someone to badger. On your own with an unfamiliar dataset? That happens too. So here are 3 lessons from wrangling the Arts Council England 2018-2022 national portfolio dataset.
Arts Council England is a public body supporting arts and culture in England. It is funded by the UK government and the National Lottery. Between 2015 and 2018, it will invest £1.8 billion in arts, museums and libraries. The funds will support art and culture experiences including theatre, digital art, reading, dance, music, literature, crafts and collections.
Why on earth are we interested in the national portfolio dataset?
The National Portfolio programme supports organisations considered by Arts Council England to represent the best of global arts practice. Funding is given over multiple years, currently 3. Between 2015 and 2018, £1 billion will be invested in 663 organisations.
That’s a lot of money and lot of prestige! I’m still exploring the dataset but here’s what I’ve learned so far.
Lesson 1: Test your assumptions
My first assumption was a bust. One thing it’s useful to know is “Which fields make the data unique?”. This helps us report on stuff like “How many grants were issued by the Arts Council?” and “To how many organisations?”. It was easy to jump in at first glance and say the organisation’s name, the Applicant Name. Unfortunately, an organisation can be awarded under multiple funds.
Ah OK, so maybe Applicant Name and the type of fund, the Funding Band? At first that worked great but then 1 rogue entry popped up… It turns out that most of the time, an organisation gets 1 grant, sometimes 2 but Tyne & Wear Archives & Museums got 3!
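You can test this kind of uniqueness assumption in a couple of lines of pandas. Here’s a minimal sketch: the column names Applicant Name and Funding Band come from the post, but the rows are invented stand-ins for the real dataset (including one deliberate rogue entry):

```python
import pandas as pd

# Toy stand-in for the national portfolio dataset (rows are invented)
df = pd.DataFrame({
    "Applicant Name": ["Org A", "Org A", "Org B", "Org C", "Org C", "Org C"],
    "Funding Band": ["Band 1", "Band 2", "Band 1", "Band 1", "Band 2", "Band 1"],
})

# Assumption: (Applicant Name, Funding Band) uniquely identifies a grant.
# keep=False marks every row involved in a clash, not just the second one.
dupes = df[df.duplicated(subset=["Applicant Name", "Funding Band"], keep=False)]
print(f"Rows breaking the assumption: {len(dupes)}")

# How many grants does each organisation hold? Flush out the outliers.
grants_per_org = df.groupby("Applicant Name").size()
print(grants_per_org[grants_per_org > 2])
```

If `dupes` is empty, the assumption holds for this dataset; if not, you’ve found your rogue entries and can go ask why they exist.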
The upshot? Test your assumptions. This might be an anomaly or it might be legitimate. We can’t always tell, so we’re going to have to ask.
Lesson 2: Don’t be afraid to ask
📢 Data isn’t a perfect reflection of the real world.
When we collect, share or use data, we curate it. We make decisions about what and how much detail to include. We can’t assume that data is perfect, so sometimes we have to ask the hard questions like “Why was Tyne & Wear Archives & Museums awarded 3 grants?”
Other oddities cropped up in the data that needed that human touch. Arts Council England share a lot of geographic information. Check out what you can find:
They’re all slightly different. Some are clearly internal like ACE Region and others are official geographies like ONS Region. But what about Area? I was stumped, so I asked the very friendly Arts Council England support team.
Thanks! In your national portfolio dataset 2018-22 there's a column called Area. I wasn't sure if it's from the ONS or internal codes
The upshot? Don’t be afraid to ask. Making assumptions can come back to bite you. If you can, ask someone who knows so you understand their design choices. You don’t have to do this for every single column, focus on the ones that are most likely to solve your problem. You can also come back as you iterate. Remember, it’s a cycle.
Lesson 3: Remember it’s a cycle
There are a few methodologies, good practices and guidelines that help you punch through the worst bits of data wrangling so you can get to the good bits. You might be data mining or predicting or deep learning. No matter your intended application, you’ll most likely be iterating – going around in a cycle of try, test, understand till you have a good enough answer.
When you first start working with data it can seem overwhelming. Remembering it’s a cycle will keep you sane. You might miss things the first time, that’s OK. That’s why we test and iterate.
I started exploring the Arts Council England 2018-2022 national portfolio dataset to answer a friend’s question and then to streamline my practice. Along the way I made assumptions, backtracked, tried data visualisations that didn’t work and rolled my eyes – a lot. Each iteration, I learned something new and useful about the story of national portfolio funding for the next 3 years. I hope you have too.
When we talk about coverage or completeness, we want to know a couple of things. First, what’s there? Second, what’s missing? We want to survey the land and get a short but complete overview. How do we do this? We look at our data from more than one angle.
A map is not the territory…
Data is a tool
It represents something we’re interested in. That thing could be cars, loans, flowers, or cups. Whatever it is, we want to record or review information about it. Knowing about it can help us sell the right cars, guide our clients to the right loans, report on the state of the flower industry or manufacture more instagrammable cups.
Data describes concepts
It represents ideas we’re sharing. There are many styles and shapes of cups in the world, but the icon of a cup is pretty much universally understood. I may not know the style or shape of your cup but I understand “cup-ness”.
How does this help us understand completeness?
Let’s take a step back. We’re unlikely to be interested in every cup that ever existed, so we have a scope. Let’s say we’re interested in cups we make and sell. Our universe of cups is limited to just those cups.
We want to know a few things about our cups: the materials we used, how large or small they are, that sort of thing. So we decide on headings or columns for each of the attributes (information about cups) that we’re interested in.
This list of things about cups is the schema. It’s a template that describes what we want to know about cups. It isn’t our data on cups (we’ll add that under the headings) but it gives us some direction about what to record.
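A schema like this can be sketched directly in code. Here’s a hypothetical cup schema as a Python dataclass – the attribute names are invented for illustration, matching the kinds of things mentioned above:

```python
from dataclasses import dataclass, fields

@dataclass
class Cup:
    """Schema: what we want to know about each cup (not the cup data itself)."""
    cup_id: int       # unique code for this cup
    name: str         # product name
    material: str     # e.g. "ceramic", "glass"
    capacity_ml: int  # how large or small it is

# The schema gives us the headings; the data goes under them.
headings = [f.name for f in fields(Cup)]
print(headings)

# One row of actual cup data, filled in against the schema.
mug = Cup(cup_id=1, name="Classic Mug", material="ceramic", capacity_ml=350)
```

Notice that nothing in the field list instantly says “cup-ness” – which is exactly the point the next paragraph makes.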
Unlike the concept of a cup, the schema of a cup isn’t intuitive. We’d struggle to instantly recognise “cup-ness” by looking over this list. We’ve taken reality, abstracted it to a concept then made that into a schema which is the container for our data.
So back to completeness. When we talk about completeness, we could be talking about the concept or the schema. These are different questions but together they give us insight into the state of our cup data.
Concept – How many cups are we reporting?
Schema – How many cup attributes are we reporting?
Concepts & Schemas: How are they different?
In general, when we talk about a concept of a cup, we have a list of information we need to understand “cup-ness”. So we may agree it’s not a “cup” unless we have these things: cup#, name and type. That’s close enough to our concept of a cup that we can ask questions about the number of cups. This is the sort of information we use to plan campaigns, make strategic decisions and launch new cups to the market.
In reality, we don’t record everything diligently. We miss things out for a host of reasons. This is even more obvious when we aren’t recording the data ourselves.
Data has gaps
Understanding where those gaps are is important. Gaps affect how we report on concepts. If we’re missing cup names, that reduces the number of cups we report. We use information about gaps to improve our data collection so that we can make better strategic and planning decisions.
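To make the two views concrete, here’s one way to measure both with pandas: per-column gaps for the schema view, and rows carrying the “cup-ness” fields (cup#, name and type, as above) for the concept view. The data is invented for illustration:

```python
import pandas as pd

# Toy cup data with deliberate gaps (None marks a missing value)
df = pd.DataFrame({
    "cup_id": [1, 2, 3, 4],
    "name": ["Mug", None, "Espresso", "Tumbler"],
    "type": ["ceramic", "ceramic", None, "glass"],
    "colour": [None, None, "white", None],
})

# Schema completeness: how filled-in is each attribute?
schema_completeness = df.notna().mean()
print(schema_completeness)  # share of non-missing values per column

# Concept completeness: how many rows have everything we need for "cup-ness"?
concept_fields = ["cup_id", "name", "type"]
complete_cups = df.dropna(subset=concept_fields)
print(f"{len(complete_cups)} of {len(df)} rows count as cups")
```

Missing names or types reduce the number of cups we can report on, exactly as described above, while the per-column view tells us which attribute to chase first.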
The upshot? To understand how complete our data is, we survey our data landscape in two ways: by concepts and by schema. We can count conceptual cups or count cup attributes to find what’s there and what’s missing. The two strands help us understand what’s going on in our data.
Some things (or attributes) are more important than others; they map to concepts;
Some things are conceptual (“cup-ness”) others are schematic (the cup attributes);
Some things are more useful for planning and strategy (concept) and others for improving data quality (schema).
Edafe Onerhime is a consultant on Data Science and Data Analysis who has over 20 years of experience answering difficult questions about open data. She has helped governments, charities and businesses make better decisions and build stronger relationships by understanding, using and sharing their data. In this episode, we discuss the history of open data, its importance in building communities and its similarities to open source and open science.
How can we use visualisation as a tool for collaboration? Insight is best when shared; when every stakeholder not only understands the end result, they’re informed about the context and impact. In a nutshell, they understand “What does this mean?”.
This is a proposal I submitted to Joining The Dots, a symposium to share data visualisation knowledge and techniques.
Visualisation is communication. Making communication clear, concise and unambiguous promotes collaboration and discussion around complex and nuanced data. In my talk, I cover visualising data to promote collaboration and as an antidote to reams of text.
I focus on examples such as visualising metadata like the 360Giving data standard to help adopters understand how the standard fits together and the story they can tell with their data.
You want people to use your data. They want confidence that they can trust your data and rely on it, now and in the future. A good open data policy can help with that.
An open data policy sets out your commitment to your open data ecosystem. It should detail how you will collect, process, publish and share data. It will set expectations for anyone using your open data and if you stick to it, lead to confidence about what to expect.
How do I make my open data as useful as possible? How do I connect it with other data to boost insight? How do I answer really tough questions with open data? Make it play well with other data – make it interoperable.
(Computer Science) of or relating to the ability to share data between different computer systems, esp on different machines: interoperable network management systems.
Why should you care about this?
If you want your open data to help answer questions, solve problems, boost the economy by fuelling innovation or be used in research, you need to go beyond names and places.
Do these mean the same company?
How about now?
GB-COH-123456: ACME Limited
Bit more confident? You can take that code 123456* and find the company on Companies House (Hint, that’s what the GB-COH- tells the machine using your open data!). Go you, you’ve just opened up a whole new world of information! This example uses a shared standard way of talking about organisations; find out more at org-id.guide.
(* P.S This is just an example, ACME doesn’t really exist!)
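Splitting an identifier like this into its parts is straightforward. A minimal sketch – the GB-COH prefix is the real Companies House scheme mentioned above, and the company code is the same made-up example:

```python
def parse_org_id(org_id: str):
    """Split an org-id style identifier into (country, register, code).

    Format: <country>-<register>-<code>, e.g. GB-COH-123456 means
    country GB, register COH (Companies House), company number 123456.
    """
    country, register, code = org_id.split("-", 2)
    return country, register, code

country, register, code = parse_org_id("GB-COH-123456")
print(country, register, code)
```

With the register identified, a machine reading your open data knows exactly which agency’s records the code points at – that’s the whole trick.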
You can start to answer questions like these:
These codes, or identifiers, are a gold mine. Every country has agencies that give codes to businesses, charities, non-profits and more. Use those codes where you can.
Can I share codes for anything else?
Of course! You can identify places, things, categories, types and much, much more.
Tip: Make your open data more useful by making it easy to connect with other data.