

OpenWater’s Data Transfer Solution for Video-Heavy Clients


Data Demands

Video-related awards, in particular motion picture and television awards, use an extraordinary amount of data. Transferring 15-20 terabytes of data is time-consuming and costly. For reference, one terabyte is equal to about 1,500 CD-ROM discs.
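To see why 15-20 terabytes is a serious undertaking, it helps to estimate how long such a transfer takes over an ordinary internet connection. The sketch below is a back-of-the-envelope calculation; the connection speeds are illustrative assumptions, not figures from OpenWater.

```python
def transfer_days(terabytes: float, mbps: float) -> float:
    """Days needed to move `terabytes` of data at `mbps` megabits/second."""
    bits = terabytes * 1e12 * 8        # terabytes -> bits
    seconds = bits / (mbps * 1e6)      # megabits/s -> bits/s
    return seconds / 86_400            # seconds -> days

for speed in (100, 1_000):             # e.g. 100 Mbps vs 1 Gbps uplink
    print(f"15 TB at {speed} Mbps: {transfer_days(15, speed):.1f} days")
```

Even at a full gigabit per second, sustained with no interruptions, 15 terabytes takes well over a day of continuous transmission, which is why shipping a physical device can be the faster option.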

One solution OpenWater offers is coordinating Snowballs on behalf of clients. (An Amazon “Snowball” is a shippable storage appliance, in effect a gigantic hard drive, that physically migrates petabyte-scale data sets into and out of the cloud.)

Beyond Snowball: Data Center to Data Center Transit

One example is our work with the Peabody Awards.

These preeminent media awards recognize the “best of broadcasting.” The Peabody Awards stream data from more than 1,000 submissions through multiple rounds of review. Submissions come from programs made by local, national, cable, and international producers. All of this film, television, music, and other media is stored in the cloud. However, transmission from the cloud to individual reviewers and judges can be problematic.

That’s because this data moves across the internet over a very old protocol: FTP. Consider it the “consumer-grade” of internet pipes. Add to this another challenge: competing internet providers may intentionally slow data transmission from rivals, an impact felt by consumers. (This is the root of the ongoing “net neutrality” debate in Washington.)

As a result, reviewers can experience throttling and delays as the data moves through different internet providers. That’s unacceptable for live judging, and it makes for a poor experience for geographically remote judges and everyone else.

One way to circumvent this situation is to move the data from data center to data center.

The Peabody Awards work with the team at the Walter J. Brown Media Archives. The archives, at the University of Georgia, are the only public archive devoted solely to the preservation of audiovisual materials.

OpenWater’s team worked with the Walter J. Brown team to create a direct connection between data centers. Now files can go directly from our data center to theirs at a very high speed. We also built a custom tool to check the integrity of transmission and ensure fast, interruption-free viewing. This is our content ingestion network.
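The internals of OpenWater’s integrity-checking tool aren’t described here, but the general technique is to hash each file before and after transit and compare digests. A minimal sketch of that checksum approach, assuming nothing about the actual tool:

```python
# Verify transfer integrity by comparing SHA-256 digests of the source
# file and the transferred copy. Illustrative only; not OpenWater's tool.
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 in 1 MiB chunks (video files are large)."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify(source: Path, copy: Path) -> bool:
    """True when the copy's digest matches the source's."""
    return sha256_of(source) == sha256_of(copy)
```

If the digests match, the file arrived bit-for-bit intact; if not, the transfer can be retried before a judge ever hits a corrupted stream.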

How the Peabody Team Benefits from the Direct Connection

Now, all of Peabody’s submissions are archived and confirmed in a matter of days. There is no need to ship a Snowball (a.k.a., “the internet in a box”) to ensure clear data transmission.

Moreover, Peabody enjoys a significant cost reduction when transferring a high volume of data.

How the “Direct to Data Center” Process Benefits OpenWater Clients

OpenWater has implemented a content ingestion network. This dramatically increases speed for users uploading files from any region in the world. This was something we discussed earlier in the year to help clients navigate around net neutrality slow-downs.

Our content ingestion network works by sending small chunks of data to many servers. We’ve also refined it: any delayed or failed uploads are now automatically resent.

Results include:

  • Faster and more reliable content intake by OpenWater (up to terabytes of data per day)
  • More options for clients to send and receive files

So: if you have large amounts of data, rest assured that OpenWater can handle the load. Like our MARIO initiative, the content ingestion network is our way of proactively taking action on your behalf.



Photo: Pixabay



Katie is OpenWater's Content Director. Her interests include entrepreneurship, permaculture, and blockchain. She enjoys spicy food and salty words.
