Avkash Chauhan's Blog

Windows Azure, Windows 8, Cloud Computing, Big Data and Hadoop: All together at one place.. One problem, One solution at One time...

Processing Million Songs Dataset with Pig scripts on Apache Hadoop on Windows Azure


The Million Song Dataset is a freely-available collection of audio features and metadata for a million contemporary popular music tracks.

Its purposes are:

  • To encourage research on algorithms that scale to commercial sizes

  • To provide a reference dataset for evaluating research

  • As a shortcut alternative to creating a large dataset with APIs (e.g. The Echo Nest's)

  • To help new researchers get started in the MIR field

Full Info:


Download the full 300 GB dataset:


MillionSongSubset (1.8 GB):
To let you get a feel for the dataset without committing to a full download, a subset consisting of 10,000 songs (1%, 1.8 GB) selected at random is also provided:

It contains "additional files" (SQLite databases) in the same format as those for the full set, but referring only to the 10K song subset. Therefore, you can develop code on the subset, then port it to the full dataset.

To download the 5 GB, 10,000-song subset, use the link below:


To download any single-letter slice, use the link below:


Once downloaded, you can copy the data directly to HDFS using:

>hadoop fs -copyFromLocal <Path_to_Local_Zip_File> <Folder_At_HDFS>

Once the file is available, you can verify it in HDFS as below:
grunt> ls /user/Avkash/
hdfs://10.114.238.133:9000/user/avkash/Z.tsv.m

Now you can run following PIG scripts on the Million Songs Data Subset:

 grunt> songs = LOAD 'Z.tsv.m' USING PigStorage('\t') AS (
          track_id:chararray, analysis_sample_rate:chararray, artist_7digitalid:chararray,
          artist_familiarity:chararray, artist_hotttnesss:chararray, artist_id:chararray,
          artist_latitude:chararray, artist_location:chararray, artist_longitude:chararray,
          artist_mbid:chararray, artist_mbtags:chararray, artist_mbtags_count:chararray,
          artist_name:chararray, artist_playmeid:chararray, artist_terms:chararray,
          artist_terms_freq:chararray, artist_terms_weight:chararray, audio_md5:chararray,
          bars_confidence:chararray, bars_start:chararray, beats_confidence:chararray,
          beats_start:chararray, danceability:chararray, duration:chararray,
          end_of_fade_in:chararray, energy:chararray, key:chararray, key_confidence:chararray,
          loudness:chararray, mode:chararray, mode_confidence:chararray, release:chararray,
          release_7digitalid:chararray, sections_confidence:chararray, sections_start:chararray,
          segments_confidence:chararray, segment_loudness_max:chararray,
          segment_loudness_max_time:chararray, segment_loudness_max_start:chararray,
          segment_pitches:chararray, segment_start:chararray, segment_timbre:chararray,
          similar_artists:chararray, song_hotttnesss:chararray, song_id:chararray,
          start_of_fade_out:chararray, tatums_confidence:chararray, tatums_start:chararray,
          tempo:chararray, time_signature:chararray, time_signature_confidence:chararray,
          title:chararray, track_7digitalid:chararray, year:int);
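Each line of the `.tsv.m` file is one song, with tab-separated fields in the order given to PigStorage above; every field is read as chararray except year, which is cast to int. As a rough illustration (a sketch, not part of the original run; the schema here is abbreviated to a few of the 54 fields), the same idea in plain Python:

```python
# Sketch: apply the Pig schema to one tab-separated row in plain Python.
# FIELDS is abbreviated; the real schema has 54 fields in the order of the
# AS (...) clause above. All fields stay strings except `year` (year:int).

FIELDS = ["track_id", "artist_name", "title", "year"]  # abbreviated schema

def parse_row(line, fields):
    """Split a tab-separated line and cast the trailing year to int."""
    values = line.rstrip("\n").split("\t")
    record = dict(zip(fields, values))
    record["year"] = int(record["year"])  # Pig declares year:int
    return record

# Invented sample row (abbreviated to match FIELDS); year 0 means "unknown".
sample = "TRAAAAW128F429D538\tCasual\tI Didn't Mean To\t0"
song = parse_row(sample, FIELDS)
print(song["title"], song["year"])  # → I Didn't Mean To 0
```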

grunt> filteredsongs = FILTER songs BY year == 0 ;

grunt> selectedsong = FOREACH filteredsongs GENERATE title, year;

grunt> STORE selectedsong INTO 'year_0_songs' ;


grunt> ls year_0_songs
hdfs://10.114.238.133:9000/user/avkash/year_0_songs/_logs <dir>
hdfs://10.114.238.133:9000/user/avkash/year_0_songs/part-m-00000<r 3> 15013
hdfs://10.114.238.133:9000/user/avkash/year_0_songs/part-m-00001<r 3> 12772
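A natural next step (a sketch, not from the original run) is to aggregate the same songs relation, for example counting how many tracks fall in each year with GROUP and COUNT:

```pig
-- Sketch: count songs per year (year 0 means the year is unknown).
byyear = GROUP songs BY year;
yearcounts = FOREACH byyear GENERATE group AS year, COUNT(songs) AS n;
DUMP yearcounts;
```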

grunt> songs1980 = FILTER songs BY year == 1980 ;

grunt> selectedsongs1980 = FOREACH songs1980 GENERATE title, year;

grunt> dump selectedsongs1980;
…….
(Nice Girls,1980)
(Burn It Down,1980)
(No Escape,1980)
(Lost In Space,1980)
(The Affectionate Punch,1980)
(Good Tradition,1980)

Now join the two results, selectedsong and selectedsongs1980
[Inner join is the default]

grunt> final = JOIN selectedsong BY $0, selectedsongs1980 BY $0;

grunt> dump final;

(Burn It Down,0,Burn It Down,1980)
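The inner join keeps only titles that appear in both relations; since "Burn It Down" occurs with year 0 and year 1980, it is the lone match. A minimal Python sketch of the same idea (sample rows invented, mirroring the dump output above):

```python
# Sketch: inner join on the first field ($0, the title), as Pig's
# JOIN selectedsong BY $0, selectedsongs1980 BY $0 does.
year0 = [("Sz", 0), ("Burn It Down", 0)]            # invented sample rows
songs1980 = [("Nice Girls", 1980), ("Burn It Down", 1980)]

# Keep only pairs whose titles match; concatenate the tuples like Pig does.
joined = [a + b for a in year0 for b in songs1980 if a[0] == b[0]]
print(joined)  # → [('Burn It Down', 0, 'Burn It Down', 1980)]
```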


The same join of selectedsong and selectedsongs1980 can be made outer by adding LEFT, RIGHT, or FULL; here is a FULL outer join:


grunt> finalouter = JOIN selectedsong BY $0 FULL, selectedsongs1980 BY $0;

grunt> dump finalouter;

(Sz,0,,)
(Esc,0,,)
(Aida,0,,)
(Amen,0,,)
(Bent,0,,)
(Cute,0,,)
(Dirt,0,,)
(Rome,0,,)
….
(Tongue Tied,0,,)
(Transferral,0,,)
(Vuelve A Mi,0,,)
(Blutige Welt,0,,)
(Burn It Down,0,Burn It Down,1980)
(Fine Weather,0,,)
(Ghost Dub 91,0,,)
(Hanky Church,0,,)
(I Don't Know,0,,)
(If I Had You,0,,)
….
(The Eternal - [University of London Union Live 8] (Encore),0,,)
(44 Duos Sz98 (1996 Digital Remaster): No. 5_ Slovak Song No. 1,0,,)
(Boogie Shoes (2007 Remastered Saturday Night Fever LP Version),0,,)
(Roc Ya Body "Mic Check 1_ 2" (Robi-Rob's Roc Da Jeep Vocal Mix),0,,)
(Phil T. McNasty's Blues (24-Bit Mastering) (2002 Digital Remaster),0,,)
(When Love Takes Over (as made famous by David Guetta & Kelly Rowland),0,,)
(Symphony No. 102 in B flat major (1990 Digital Remaster): II. Adagio,0,,)
(Indagine Su Un Cittadino Al Di Sopra Di Ogni Sospetto - Kid Sundance Remix,0,,)
(Piano Sonata No. 21 in B flat major_ Op.posth. (D960): IV. Allegro non troppo,0,,)
(Córtame Con Unas Tijeras Pero No Se Te Olvide El Resistol Para Volverme A Pegar,0,,)
(Frank's Rapp (Live) (24-Bit Remastered 02) (2003 Digital Remaster) (Feat. Frankie Beverly),0,,)
(Groove On Medley: Loving You / When Love Comes Knocking / Slowly / Glorious Time / Rock and Roll,0,,)
(Breaking News Per Netiquettish Cyberscrieber?s False Relationships In A Big Country (Album Version),0,,)

Using LIMIT:

grunt> finalouter10 = LIMIT finalouter 10;

grunt> dump finalouter10;

(Sz,0,,)
(Esc,0,,)
(I&I,0,,)
(Uxa,0,,)
(ZrL,0,,)
(AEIO,0,,)
(Aces,0,,)
(Aida,0,,)
(Amen,0,,)
(Bent,0,,)


Comments:
  • Hi Avkash, how did you get the Million Songs data in .tsv format, as it is available in .h5 format?

  • I would also like to know where you can get the .tsv version.

  • how do we get the tsv.m file?

  • Here are wrappers in various programming languages which allow you to parse and read data from an HDF5 file. You can take any one of them and modify the code to store the data in another format such as TSV or CSV:

    labrosa.ee.columbia.edu/.../code
