
Thu, May 29, 2014 9:28 PM

API/Bulk Data Access

Hi!

We’re in the process of reviewing how we make our data available to the outside world, with the goal of making it easier for anyone to innovate and answer interesting questions with the data. If you use our current FTP solution to get data [http://www.imdb.com/interfaces] or are thinking about it, we’d love to get your feedback on the current process for accessing data and on what we could do to make it easier for you to use in the future. We have some specific questions below, but we’d be just as happy hearing about how you access and use IMDb data so we can build a better overall experience.

1. What works/doesn’t work for you with the current model?
2. Do you access entertainment data from other sources in addition to IMDb?
3. Would a single large data set with primary keys be more or less useful to you than the current access model? Why?
4. Would an API that provides access to IMDb data be more or less useful to you than the current access model? Why?
5. Does how you plan on using the data impact how you want to have it delivered?
6. Is JSON format sufficient for your use cases (current or future) or would additional format options be useful? Why?
7. Are our T&Cs easy for you to understand and follow?


Thanks for your time and feedback!

Regards,

Aaron
IMDb.com

Responses

2 Messages • 92 Points • 6 years ago

This is sweet music to my ears. At least half a dozen times I have sat down determined to write a modern parser for the IMDb text data in .NET and ultimately given up because (a) the data is infuriatingly hard to work with and (b) it doesn't provide any way of interacting with personalised data such as my watchlist and ratings. As you must be aware there are several unofficial APIs to your data, all of which are either poor quality, unreliable or incomplete. Competitors like the Open Movie Database and Rotten Tomatoes APIs simply can't compete in terms of data quality or completeness, so a modern IMDb API would be a dream come true and I would start using it from launch day.

1. What works/doesn’t work for you with the current model?

Quite frankly the current model is horrendous. The raw data is presented in a format so proprietary and unstructured it's almost as if it was done deliberately to discourage uptake. I appreciate that probably isn't the case and it's just very old. I got relatively far with a parser once but found I was hitting error after error whilst hunting for specific text patterns rather than parsing a known format.
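
As an illustration of the kind of pattern-hunting involved, here is a minimal Python sketch of the sort of regex a parser ends up needing for one movies.list-style line. The pattern below is an approximation from memory rather than the actual specification, and every variant it misses becomes yet another special case:

    import re

    # Rough approximation of a movies.list-style line; the real format has many
    # more variants (TV quotes, episode braces, roman-numeral suffixes, suspended
    # markers), which is exactly what makes this approach so fragile.
    LINE = re.compile(
        r'^(?P<title>.+?)\s+\((?P<year>\d{4}|\?\?\?\?)(?:/(?P<numeral>[IVX]+))?\)'
        r'(?:\s+\{(?P<episode>[^}]*)\})?\s+(?P<end>\S+)\s*$'
    )

    def parse(line):
        m = LINE.match(line)
        return m.groupdict() if m else None  # unmatched variants need yet more patterns

    print(parse('"Some Series" (2004) {Pilot (#1.1)}\t2004'))
    print(parse('Some Movie (1999)\t1999'))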

More importantly though, I have no need for or interest in locally storing the entire database. If you were to start presenting the data in say, XML or JSON as flat files I would most definitely consider using it, but I would MUCH prefer an API that allowed me to retrieve individual records or sets of records rather than parsing an enormous volume of data from which I might only ever use a tiny fraction.

2. Do you access entertainment data from other sources in addition to IMDb?

I've investigated using the Rotten Tomatoes API, the Open Movie Database and various unofficial IMDb APIs but none of them really do what I want.

3. Would a single large data set with primary keys be more or less useful to you than the current access model? Why?

Yes, because it would be easier to construct a usable database from. Any modern application consuming this data is going to want to construct a relational or NoSQL database from the data, for which primary keys are essential. However, a single large dataset would be far less useful to me than a random access API.

4. Would an API that provides access to IMDb data be more or less useful to you than the current access model? Why?

MUCH more useful. Having a complete copy of the database locally may have its advantages for some applications, but it would be so large that it wouldn't make much sense for a mobile app, for example, and would hence necessitate building an intermediate database and web service API from the dumped data files, which would be less reliable and less current than an API provided directly by IMDb. Most apps (certainly the kind I have in mind) would want to perform searches and maybe cache a small number of records locally for performance, but have no need for fully offline access.

5. Does how you plan on using the data impact how you want to have it delivered?

Yes. The use case I have in mind right now means I would want to be able to access a user's watchlist and ratings (with their authorisation of course using OAuth or similar) and access key data about the movies that user has expressly registered their interest in through the watchlist and ratings features. That use case would be ruled out by a static data dump.

6. Is JSON format sufficient for your use cases (current or future) or would additional format options be useful? Why?

JSON would be my first choice of data format, although I'd argue that a choice of JSON or XML is best practice for a modern web API design. Ideally the API would adhere to REST best practice and employ content negotiation for format selection.
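
To make the content-negotiation point concrete, here is a minimal sketch of a client choosing its format via the Accept header; the endpoint is purely hypothetical, not an existing IMDb API:

    import urllib.request

    # Hypothetical endpoint, shown only to illustrate content negotiation; the
    # same URL could return JSON or XML depending on the Accept header sent.
    req = urllib.request.Request(
        "https://api.example.com/titles/tt0111161",
        headers={"Accept": "application/json"},  # or "application/xml"
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.headers.get("Content-Type"))
        print(resp.read()[:200])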

7. Are our T&Cs easy for you to understand and follow?

Yes, no issues in that regard.

1 Message • 60 Points • 6 years ago

I didn't know that the data was even available -- my own use for it would be to hook up to my off-line dictionary; the potential feature is: "type in a word, and see it in the context of movie tag lines and plots".

1. What works/doesn’t work for you with the current model?
Answer: it's a slightly weird-looking format, but nothing that looks too hard to parse.

2. Do you access entertainment data from other sources in addition to IMDb?
Answer: don't know; I have only a potential feature

3. Would a single large data set with primary keys be more or less useful to you than the current access model? Why?
Answer: no

4. Would an API that provides access to IMDb data be more or less useful to you than the current access model? Why?
Answer: absolutely not. My dictionary program is strictly off-line for speed. I would only be interested in bulk data.

5. Does how you plan on using the data impact how you want to have it delivered?
Answer: not really; I would probably simply bundle up the appropriate text files into my app.

6. Is JSON format sufficient for your use cases (current or future) or would additional format options be useful? Why?
Answer: JSON would be superior but not essential.

7. Are our T&Cs easy for you to understand and follow?
Answer: no, they are not. There's a set on the T&C page that doesn't really match the conditions in the files themselves. Also, as an FYI: the "email us for T&C" clause technically only applies if I want express written consent. If I want to use implied consent, then I don't have to email you for alternative terms. But the terms for implied consent are pretty vague.

1 Message • 60 Points • 6 years ago

I would really love an SQLite database file. For simple use cases, this would be enough already. For more advanced use cases, creating a script that moves/syncs the data to a "real" database becomes a lot easier because the full relational schema is already there.
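
For what it's worth, a minimal sketch of that simple use case, assuming IMDb shipped a single imdb.db SQLite file; the table and column names here are invented for illustration, not an actual IMDb schema:

    import sqlite3

    # Assumes a downloaded imdb.db with a hypothetical titles(title, year, rating) table.
    con = sqlite3.connect("imdb.db")
    rows = con.execute(
        "SELECT title, year, rating FROM titles "
        "WHERE year = ? ORDER BY rating DESC LIMIT 10",
        (1999,),
    ).fetchall()
    for title, year, rating in rows:
        print(f"{title} ({year}): {rating}")
    con.close()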

2 Messages • 70 Points • 6 years ago

There is some good feedback here already, and while I have an interest in the IMDB data, I am not currently using it and don't have any specifics in mind.

The best thing you can do is provide a documented REST API with a STRONGLY documented data structure. Be clear with this part and everything else kind of falls into place.

If you want to provide data via FTP, why not have an option for full/incremental database backups? It doesn't have to be a dump of your full DB, just the fields you want to provide. Then anyone who wants to use it can either load and translate as needed or just restore the backup and use the DB as is. Maybe consider a data-warehouse-type export? This would let sites that want to use your data have fast read/report-style access (since I would assume modifying it is not super useful).

Definitely include primary keys! If you have lots of surrogate keys (good) make sure there is some kind of unique identifier for each record.

If I were to build an application making use of the data, I would prefer a read-only model that I could refresh from your source as needed. If I wanted to store additional information with the IMDB data, I would have a metadata database/table structure that could be referenced via the IMDB PK fields. This way I can refresh that data at will and not lose any of my additional data.
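
A minimal sketch of that refresh-safe pattern, using SQLite with made-up table and column names: the IMDb copy is dropped and reloaded wholesale, while the local metadata survives because it only references the IMDb primary key:

    import sqlite3

    con = sqlite3.connect("movies.db")
    # Read-only copy of the IMDb data, replaced wholesale on every refresh.
    con.execute("CREATE TABLE IF NOT EXISTS imdb_titles (imdb_id TEXT PRIMARY KEY, title TEXT, year INTEGER)")
    # Local metadata that must survive refreshes; it stores only the IMDb key plus my own fields.
    con.execute("CREATE TABLE IF NOT EXISTS my_notes (imdb_id TEXT PRIMARY KEY, watched INTEGER, note TEXT)")

    def refresh(rows):
        """Replace the IMDb copy; my_notes is untouched."""
        con.execute("DELETE FROM imdb_titles")
        con.executemany("INSERT INTO imdb_titles VALUES (?, ?, ?)", rows)
        con.commit()

    refresh([("tt0000001", "Carmencita", 1894)])
    print(con.execute(
        "SELECT t.title, n.note FROM imdb_titles t LEFT JOIN my_notes n USING (imdb_id)"
    ).fetchall())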

1 Message • 60 Points • 6 years ago

Dear Aaron and IMDb staff,

I am a researcher at the University of Verona, Italy.
With two colleagues from CEA-Saclay in Paris, we are doing scientific research based on IMDb data, with the aim of identifying the determinants of movie success and how they evolve over time.

For this purpose, we need to revert the database to previous states in time, but unfortunately we noticed that some of the diff files are missing or empty (list below).
Is there any way to recover/obtain them?

I really appreciate your help on this matter, which is so important for carrying out our research on the movie industry successfully.
Moreover, this could be an occasion to fix the consistency of a database that, apart from the improvements discussed on this page, is genuinely great to have publicly available.

I look forward to hearing from you.
Best regards,

Paolo Sgrignoli

---
Missing diff files:
diffs-140207.tar.gz
diffs-140131.tar.gz
diffs-130621.tar.gz
diffs-090313.tar.gz
diffs-060707.tar.gz
diffs-060630.tar.gz
diffs-060623.tar.gz
diffs-060616.tar.gz
diffs-050422.tar.gz
diffs-041022.tar.gz
diffs-040625.tar.gz
diffs-030516.tar.gz
diffs-030103.tar.gz
diffs-021213.tar.gz
diffs-000609.tar.gz
diffs-000602.tar.gz
diffs-981113.tar.gz
diffs-981106.tar.gz

Empty diff files:
diffs-140613.tar.gz
diffs-140214.tar.gz
diffs-100514.tar.gz
diffs-100507.tar.gz
diffs-050819.tar.gz
diffs-050812.tar.gz
diffs-050506.tar.gz

3 Messages • 390 Points

Paolo,

Thanks for the feedback. We no longer keep historical diffs, so we would be unable to provide them.

Regards,

Aaron

1 Message • 62 Points • 6 years ago

It would be best if the database is designed in a way that it can be easily:
1. Maintained
2. Modified
3. Distributed
4. Accessed

The first three are outside my expertise, but for accessing all the data in the best way, you could have:

Relational databases with primary keys and foreign keys
SAS or Access datasets would be good
All of the data should be accessible by using joins
For example:

The movies table might have, say:

Movie | Year | Length | Rating | No of Ratings | Cost | Earnings | No of weeks in theaters | Awards |etc

And the actors table could have:

Actor | Movie/ (TV Series) | Character name | rating | Awards

A second actors table could have:

Actor | Age | Sex | No of movies | No of TV shows | No of Awards

Now all of the data can be accessed by joins. This structure avoids redundancy and so reduces space consumption.
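
A small sketch of that structure in SQL (run here through SQLite); the column names follow the tables proposed above and are illustrative only, not an actual IMDb schema:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE movies (movie TEXT PRIMARY KEY, year INTEGER, rating REAL);
        CREATE TABLE actors (actor TEXT PRIMARY KEY, age INTEGER);
        CREATE TABLE roles  (actor TEXT REFERENCES actors(actor),
                             movie TEXT REFERENCES movies(movie),
                             character_name TEXT);
    """)
    # One join answers "who played whom in which 1999 movie" without duplicating
    # movie or actor details per role, which is what avoids the redundancy.
    query = """
        SELECT a.actor, a.age, m.movie, m.year, r.character_name
        FROM actors a
        JOIN roles  r ON r.actor = a.actor
        JOIN movies m ON m.movie = r.movie
        WHERE m.year = ?
    """
    print(con.execute(query, (1999,)).fetchall())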

2 Messages • 70 Points • 6 years ago

I am very thankful that IMDB makes their data available in bulk format.  I'm a big fan of film and have had fun playing around with the IMDB data.

That being said, the format is really terrible to work with.  When I first started using it I figured that it must actually be formatted that way to discourage people from using it.  Creating a regular expression to parse even just the title is quite tricky with so many optional fields and different delimiters which may or may not be present.

Some here have recommended delimited files, but I think that would be a mistake.  While delimited files are perfect for tabular data, many of the files do not contain tabular data but instead blocks of data.  (Such as a set of AKA titles for each movie title)  A much superior solution, which would be usable by everyone very easily, would be to use JSON.  A JSON file can easily include additional fields for some records and fewer for others.  There are also robust, fast parsers for pretty much every language and platform out there.  A big benefit to IMDB would be that changing the format would be very easy.  If you wanted to include a new bit of information for some records, the data could be added without disrupting any existing parsers.  It would just be a new field that they aren't looking for anyway.
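
A tiny sketch of that forward compatibility (the field names are invented for illustration): a consumer that only reads the fields it knows about keeps working when a new field appears in some records:

    import json

    # Two hypothetical title records; the second carries a field added later.
    records = [
        '{"id": "tt0000001", "title": "Carmencita", "year": 1894}',
        '{"id": "tt0000002", "title": "Le clown et ses chiens", "year": 1892, "runtime_minutes": 3}',
    ]
    for line in records:
        rec = json.loads(line)
        # Unknown fields such as "runtime_minutes" are simply ignored by older consumers.
        print(rec["title"], rec.get("year", "?"))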

As some have mentioned, having a sort of "primary key" to cross-relate the files would be extremely useful. IMDB already has such a key, of course, in the ID used in the site's URLs. I imagine that bit of information might not be provided in the files in order to discourage screen scraping, but if so, that's not very effective. It just makes people looking to scrape info from your site hit your search engine before grabbing the movie's page.

I find your Terms of Service very reasonable.  I've never wanted to do anything that conflicts with them for my own playing around. 

Whatever changes are made, I sincerely hope that you do not do away with the ability to get the data in bulk. Many APIs are designed such that they just service specific requests, but for the things I do at least (such as analyzing the differences between the ratings assigned to movies in different countries), that would be useless. My worst-case scenario would be IMDB developing an API, doing away with the bulk data download, and limiting the number of requests per day. For applications that just let people see information about movies on a personal wishlist, or watch for new productions from a favorite director, or the like, that would be sufficient, but for any statistical analysis it'd be useless.

1 Message • 60 Points • 6 years ago

1. What works/doesn’t work for you with the current model?
Parsing the 50 files is cumbersome and error-prone due to different formats and errors in the data. It's also a pain to recombine the information into the separate entities persons and movies.

Having the data is great though.

2. Do you access entertainment data from other sources in addition to IMDb?
No.

3. Would a single large data set with primary keys be more or less useful to you than the current access model? Why?
See my answer to 1. Merging the information from separate files is annoying because there are about half a dozen different file formats. Having just one big data set would make all of these problems go away.

4. Would an API that provides access to IMDb data be more or less useful to you than the current access model? Why?
No, I prefer having the data in one download. APIs bring usage restrictions and force me to use the data in a very specific and therefore limited way, and I want to be free in how I query the data.

5. Does how you plan on using the data impact how you want to have it delivered?
Kind of. Like I said in answer 4, having a data set I can download is preferable to an API. I want to be able to run queries and combine the IMDb data with other data sets.

6. Is JSON format sufficient for your use cases (current or future) or would additional format options be useful? Why?
JSON would be completely sufficient.

7. Are our T&Cs easy for you to understand and follow?
Yes.

1 Message • 60 Points • 6 years ago

Hello IMDb Team,

First of all, I'd like to thank you very much for making the data available at all! I really appreciate it, because I know how much work it is to create and maintain such a project.

But as you already know, the current data format is really hard to handle for most uses. I am an IT developer and entrepreneur, creating and running databases with hundreds of millions of entries. From my point of view, your kind of data is best suited for use within SQL databases, containing primary keys and foreign key relations. That way, a SQL dump would be awesome for anyone who uses SQL to query the data.

But let me first answer your questions:

1. What works/doesn’t work for you with the current model?
A: The current model is not uniform and not scalable; it's hard to build an index on top of it, which kills performance. Complex searches across different fields are not possible, simply because there are no (delimited) fields. To use this data in a convenient manner, I have to convert it to a standard database format that allows joining datasets using primary and foreign key relations and performing logarithmically scaled searches on B-tree indexes for best performance. But even a simple conversion to SQL is not made easy. Even more problematic: information is not only stored within the data itself, but also within the data structure, which is an absolute no-go when dealing with data structures. NEVER store information within your structure!

2. Do you access entertainment data from other sources in addition to IMDb?
A: I am considering it, yes. But this requires a clear type-to-subdata attribution. That means there should be a strong relationship model between a data entry and its object entity. So a name should be a film name OR a series' episode name, but not both. However, I know that changing a data model is not as easy as just defining a different data format. I guess we have to live with that.

3. Would a single large data set with primary keys be more or less useful to you than the current access model? Why?
A: I'm not really sure I understand this question. Primary keys are most relevant, yes, but "a single large dataset"? A single SQL dump would be okay, because it can contain several tables.

4. Would an API that provides access to IMDb data be more or less useful to you than the current access model? Why?
A: Less useful, because an API offers only limited capability for querying the data. It also decreases performance on large-scale queries, and it is OS- and language-dependent.

5. Does how you plan on using the data impact how you want to have it delivered?
A: It depends. ;-) I always transform all source data into an overall standard structure like CSV, SQL, RDF or XML. This standard structure can then be optimized for the way it will be accessed. Therefore it would, of course, be nice to have this data already stored in such a standard format, but I don't really mind which standard format you choose. I cannot think of any situation in which anybody profits from the data being stored in a nonstandard format like the current one. Some projects may require nonstandard formats, but the probability that they match a nonstandard format like yours is rather low.

6. Is JSON format sufficient for your use cases (current or future) or would additional format options be useful? Why?
A: It would still be much better than the current format, because it can be easily loaded and converted to SQL and so on. But JSON lacks an explicit format declaration, which SQL, XML and RDF have, and JSON tends to reach a nesting depth and complexity that cannot be easily guessed. On the positive side, there are a lot of JSON parsers that make it easy to load the data, and the charset is UTF-8 throughout, so it's well suited to "gather" all relevant information, but not to "find" data using complex searches. JSON is optimal for websites using JavaScript/ECMA and Ajax to asynchronously query data from the web server; it is fine for serving data that is small enough to fit into memory, even a thousand times at once. JSON is quite useful for data exchange when the data format is well known, but not for database dumps. In any case, users need a way to query data without loading large data files into memory. With JSON, you would need to load all the data or transfer it to another format. But when you need a conversion anyway, why not offer the data in the target format in the first place?

7. Are our T&Cs easy for you to understand and follow?
A: Yes, so far. But I might contact you next year to clarify some special issues. You should probably think about updating your T&Cs for Web 2.0 use cases. We are currently working (internally) on Web 3.0 features. One major problem is that a lot of old T&Cs are rarely Web 2.0 compliant, let alone Web 3.0 compliant.

----

I've got some questions, too. Are the flat files really the ones you're manually working with, e.g. for adding and modifying data? Or do you already use a database for that? If so, which kind? At the very least, your web platform will make use of a database for performance reasons. So why don't you offer the data in just the format that you are using yourselves?

Nowadays, data exchange formats should be well-defined and standards-compliant. That ensures all data can be imported into totally different DBMSs without interfering with the data model itself or with data integrity. XML is only suitable for small amounts of data, and SQL dumps naturally target only SQL-type databases. Whether SQL is the best format may be answered by asking whether there is anyone who doesn't use SQL; let's assume there is, and move on.

RDF is a subset of XML but more complex than plain XML: it introduces the possibility of storing and setting up a data graph. A data graph is, simply put, a set containing two types of elements: nodes and edges. Each node is an object, and each edge is a relation between two objects. The difference from standard XML and even SQL is that it can store objects and relations that are not predefined; a graph can record data without modifying the data model. It would be great for IMDb-like data, but I guess it's too complicated for simple usage.

So CSV and JSON remain. To not be language-dependent at all, I would prefer CSV over JSON: UTF-8 charset, predefined field names, field delimiters and field enclosures with C-style escapes. Field names should be unique across different files, each file should contain one dataset per line, and each dataset should contain a numeric primary key. If these primary keys are unique across all files, users could still set up a graph on top of the data. Using JSON instead of CSV would be okay, as long as it still represents one dataset per line. But using one object for all the data in JSON has no advantage over standard CSV in my opinion. I don't prefer having all the data in one file, except for a SQL dump, which could come in 3 or 4 different files (films, series, actors/actresses, other data). For CSV and JSON it might be better to offer one file per table.
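
To make the one-dataset-per-line point concrete, here is a small sketch (the file name and field names are invented): with one JSON object per line, a consumer can stream an arbitrarily large dump without ever holding it all in memory, which addresses the memory concern raised under question 6:

    import gzip
    import json

    # Hypothetical file and fields; one JSON object per line allows streaming.
    def titles_from(path):
        with gzip.open(path, "rt", encoding="utf-8") as fh:
            for line in fh:
                yield json.loads(line)

    # Only one record is in memory at a time, however large the dump is.
    # for rec in titles_from("titles.ndjson.gz"):
    #     if rec.get("year") == 1999:
    #         print(rec["id"], rec["title"])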

If I can be of any help, please contact me using my email address imdb -at- abotech.de.

If I fully understand your data model and format, I might be able to provide server space and different formats for download and/or access as a contribution to your project. Of course, regarding your T&Cs, this would require your explicit permission first.


Best regards,
Andy

PS: sorry for my bad English :)

1 Message • 60 Points

Hi Andy,

Do you have the IMDb data available as RDF files? I'm working on a project and this format would save me a lot of time!

Many thanks for your help in advance,
Fred

2 Messages • 92 Points • 6 years ago

Any chance of an update on this from IMDb staff? The question was posted months ago now, and it would be nice to know what, if anything, you are working on in this area.