nēdl: Using Amazon Alexa to Find a Needle in a (Global) Haystack of Sound


Worldwide, there are more than 100,000 radio stations producing content. Wouldn’t it be fantastic if there were a way to search that content in real time?

Did you know that, according to the Federal Communications Commission’s latest count, there are more than 15,000 AM and FM radio stations currently broadcasting content in the US? Worldwide, there are more than 100,000 radio stations producing content. That means that – at this very moment – there are thousands of different songs playing, hundreds of live sporting events being broadcast, and thousands of different commentators sharing news, entertainment, insight, and interviews.

With today’s technologies, consumers now have access to most of that content, regardless of their physical locations. Wouldn’t it be fantastic if there were a way to search that content in real time, helping you quickly find the live streams featuring the topics and artists you most want to hear? The very cool news is that soon, courtesy of Santa Monica-based nēdl, you’ll be able to do just that. nēdl’s mission is to help you quickly find your “needle in a haystack” amidst the world’s broadcasts – in fact, that’s the inspiration behind their name.

Using speech recognition and the Amazon Web Services (AWS) platform, the nēdl app will help listeners search more than 100,000 live news, sports, talk, and music broadcast streams to quickly find the things they’re most interested in listening to. (Ultimately, nēdl will also let users create their own live stations so that they can inject themselves into the mix.) For internet-search-like abilities across live streams worldwide, you’ll simply be able to ask Alexa.

For the past several months, Distillery has been working with nēdl to build the MVP (minimum viable product) for nēdl’s Amazon Echo application. nēdl CEO and co-founder Ayinde Alakoye met Distillery CEO and founder Andrey Kudievskiy via Stubbs Alderton & Markiles, LLP’s Preccelerator program, in which Alakoye was a participant and Kudievskiy is a mentor. The mentoring relationship quickly took a highly practical, real-world turn when Alakoye and Kudievskiy decided to team up to develop the MVP.

Alakoye is no stranger to the world of radio industry innovation. In 2007, building on his first-hand experience working at radio stations, he created the Clear Channel mobile app that became the incredibly popular iHeartRadio app. In 2010, he created the Hitch Radio app, which integrates social media and instant messaging to create live broadcast radio searchability and shareability. The nēdl app – integrating speech recognition technology to provide fast, accurate results for users’ searches – is a natural progression of Alakoye’s commitment to making the world of radio as accessible as possible.

Given the limited scope of nēdl’s MVP project, only a slim Distillery team has been required thus far. The team includes project manager Tatiana Garionova, full stack developer Alexander Zamaratsky, lead QA engineer Anna Varetsa, and Distillery’s tech lead, Andrey Oleynikov, serving as technical advisor. The team has consistently proven their worth by finding feasible ways to implement nēdl’s visionary ideas.

After Distillery helped nēdl set up the needed infrastructure, the team set up the backend on nēdl’s own AWS server. They then created appropriate intents for handling users’ speech and configured the intents’ handlers for the application. Next came integration with the third-party APIs. Because that integration is a new process that hasn’t yet been properly documented, the Distillery team had to reach out to the API developers with multiple questions to find a way to make it work. While it wasn’t easy, it was gratifying for the team to successfully navigate that process – and thereby move a massive step closer to turning nēdl’s vision into a reality.
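
To give a sense of what the intent and handler work involves, here is a minimal sketch of a custom search-intent handler built with the Alexa Skills Kit SDK for Python. The intent name (SearchStreamsIntent), the query slot, and the find_streams() stub are illustrative assumptions rather than nēdl’s actual implementation.

```python
# Minimal sketch of a custom search-intent handler using the Alexa Skills Kit
# SDK for Python. The intent name, slot name, and find_streams() stub are
# illustrative assumptions, not nēdl's actual implementation.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_core.utils import is_intent_name
from ask_sdk_model import Response


def find_streams(query: str) -> list:
    """Placeholder for the call to nēdl's search API (sketched further below)."""
    return []


class SearchStreamsIntentHandler(AbstractRequestHandler):
    """Handles utterances like "nēdl, find Tom Petty"."""

    def can_handle(self, handler_input: HandlerInput) -> bool:
        return is_intent_name("SearchStreamsIntent")(handler_input)

    def handle(self, handler_input: HandlerInput) -> Response:
        slots = handler_input.request_envelope.request.intent.slots or {}
        query = slots["query"].value if "query" in slots else None

        if not query:
            prompt = "What would you like me to find?"
            return handler_input.response_builder.speak(prompt).ask(prompt).response

        streams = find_streams(query)
        if not streams:
            speech = f"I couldn't find any live streams for {query}. Try another search."
        else:
            speech = f"I found {len(streams)} live streams for {query}."
        return handler_input.response_builder.speak(speech).response


sb = SkillBuilder()
sb.add_request_handler(SearchStreamsIntentHandler())
lambda_handler = sb.lambda_handler()  # entry point when the skill runs on AWS Lambda
```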

Figure: Audials API explanation

With the third-party API integration a success, the team’s next step is to proceed with building nēdl’s backend server. That server needs to be capable of (a) receiving a stream of user speech and converting it to text using state-of-the-art automatic speech recognition software, (b) storing that text in a search index (including the user reference, text chunk, timestamp, and stream link), and (c) providing a search API that lets the Alexa skill look for stream links in which the user’s search terms have recently been uttered. The last piece of the puzzle will involve creating the nēdl API that will perform the search against the search index, and integrating that API to play the stream.
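
To make parts (b) and (c) more concrete, here is a rough sketch of what a search-index record and the search endpoint might look like. The field names, the in-memory index, and the Flask route are assumptions made for illustration; the production system’s index and ASR pipeline will differ.

```python
# Rough sketch of the search-index record and search API described above.
# The field names, the in-memory list standing in for the real index, and the
# Flask endpoint are illustrative assumptions, not the production design.
from dataclasses import dataclass
from datetime import datetime
from typing import List

from flask import Flask, jsonify, request

app = Flask(__name__)


@dataclass
class TranscriptChunk:
    """One indexed chunk of recognized speech from a live stream."""
    user_reference: str   # which stream/user produced the audio
    text_chunk: str       # ASR output for a short window of audio
    timestamp: datetime   # when the chunk was spoken
    stream_link: str      # URL of the live stream


# Stand-in for the real search index.
INDEX: List[TranscriptChunk] = []


def index_chunk(chunk: TranscriptChunk) -> None:
    """Store a freshly transcribed chunk so it becomes searchable."""
    INDEX.append(chunk)


@app.route("/search")
def search():
    """Return stream links in which the query terms were recently uttered."""
    query = request.args.get("q", "").lower()
    matches = [c for c in INDEX if query and query in c.text_chunk.lower()]
    # Most recently uttered matches first.
    matches.sort(key=lambda c: c.timestamp, reverse=True)
    return jsonify([
        {"stream_link": c.stream_link, "timestamp": c.timestamp.isoformat()}
        for c in matches
    ])
```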

So, in plain English, what will nēdl’s users experience as a result of all this hard work? In overview:

  • The user will verbally provide a search request to Alexa, saying “nēdl, find [query].” The query can take the form of an artist, a musical genre, or a general description (e.g., “baseball,” “talk show,” “movies,” “Tom Petty”).
  • The app will convert the speech to text, searching the nēdlcast database and calling third-party APIs to find the most current matches.
    • nēdlcast database search results will begin with the most recent results.
    • Third-party APIs’ search results will be filtered by a lapsed time of less than 2 minutes and 30 seconds, and sorted by least lapsed time. (“Lapsed time” is the amount of time that has passed since the found stream was played on the radio.)
  • The results will be sent back to Alexa. Alexa will first provide the results from the third-party APIs, and then the results from the nēdlcast database, iterating automatically as follows (a rough sketch of these ordering rules appears after this list):
    • Results from the third-party APIs will play for 2.5 seconds.
    • Results from the nēdlcast database will play for 6.5 seconds.
    • If there’s only one result, the app will start playing that stream.
    • If there are no results, the user can initiate another search.
  • Whenever the user likes what they’re hearing and wants to continue playing that stream, they will verbally give Alexa a “stop” command from a to-be-determined list of possible commands.
  • Finally, the user will feel immense wonder and glee at being able to so quickly and easily find an audio stream that matches their interests.
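
For concreteness, the result-merging rules described above might look roughly like the following. The function, field names, and result format are illustrative assumptions.

```python
# Rough sketch of the result-ordering rules listed above: third-party results
# are filtered to a lapsed time under 2 minutes 30 seconds and sorted by least
# lapsed time; nēdlcast results are ordered most recent first; Alexa then
# previews third-party results (2.5 seconds each) before nēdlcast results
# (6.5 seconds each). Field and function names are illustrative assumptions.
from typing import Dict, List

MAX_LAPSED_SECONDS = 150  # 2 minutes 30 seconds


def order_results(third_party: List[Dict], nedlcast: List[Dict]) -> List[Dict]:
    """Merge search results in the order Alexa will present them."""
    # Third-party results: keep only recently played streams, freshest first.
    fresh = [r for r in third_party if r["lapsed_seconds"] < MAX_LAPSED_SECONDS]
    fresh.sort(key=lambda r: r["lapsed_seconds"])

    # nēdlcast results: most recent first ("timestamp" is assumed comparable,
    # e.g. a datetime or an ISO 8601 string).
    recent = sorted(nedlcast, key=lambda r: r["timestamp"], reverse=True)

    # Third-party previews come first, then nēdlcast previews.
    return (
        [dict(r, preview_seconds=2.5) for r in fresh]
        + [dict(r, preview_seconds=6.5) for r in recent]
    )
```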

Distillery is elated to be helping Alakoye and the rest of the nēdl team to realize their vision of creating optimum searchability within the world of sound. We’re also thrilled to have had the opportunity to continue building on our capabilities and experience in working with AWS and Alexa.

After all, as nēdl says, “There is currently a race on: Will Radio figure out Technology first? Or will Technology figure out Radio first? With nēdl, Radio is figuring out Technology first and we like what that means for the future of radio.” Working with visionary clients like nēdl keeps Distillery on the front lines of that evolution, helping to figure out – via effective teaming on mobile app development – what that future will look and feel like in the palm of your hand.