Top 16% Solution to Kaggle’s Product Classification Challenge

Kaggle is a platform for predictive modelling and analytics competitions on which companies and researchers post their data and statisticians and data miners from all over the world compete to produce the best models. As of May 2016, Kaggle had over 536,000 registered users, or Kagglers. The community spans 194 countries. It is the largest and most diverse data community in the world (Wikipedia).

One of my first Kaggle competitions was the OTTO product classification challenge. OTTO is one of the world’s biggest e-commerce companies. For this competition, OTTO provided a dataset with 93 features (all of them obfuscated) for more than 200,000 products. The objective was to build a predictive model that distinguishes between OTTO’s nine main product categories.
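The shape of the task can be illustrated with a minimal nine-class softmax sketch. Everything below is synthetic: random counts stand in for the obfuscated features, and the bare linear model is only a placeholder for whatever the linked R script actually trains.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the OTTO data: rows of 93 numeric features, each labelled
# with one of nine classes (Class_1 .. Class_9). Row count reduced for the sketch.
n_samples, n_features, n_classes = 1000, 93, 9
X = rng.poisson(1.0, size=(n_samples, n_features)).astype(float)
y = rng.integers(0, n_classes, size=n_samples)

# A linear model with one weight vector per class; softmax turns the
# nine scores per product into a probability distribution over categories.
W = rng.normal(0.0, 0.01, size=(n_features, n_classes))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

proba = softmax(X @ W)       # shape (n_samples, 9); each row sums to 1
pred = proba.argmax(axis=1)  # hard class labels 0..8
```

Per-class probabilities like `proba` are exactly what a multiclass submission for this kind of challenge consists of, one row per product.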

Kaggle also allows users to publicly share their code on each competition page. It helped me a lot to check out some other people’s code before getting started. You can find my R script for the OTTO product classification challenge on my GitHub.

What’s Cooking on Kaggle? Top 20% Solution

One of the most interesting data sets I found on Kaggle was within the What’s Cooking challenge. The competition was hosted by Yummly, a mobile app and website that provides recipe recommendations. The Yummly app was named “Best of 2014” in Apple’s App Store. The competition asked participants to predict the cuisine category of a dish given a list of its ingredients. The training data included a recipe id, the type of cuisine, and the list of ingredients for each recipe. There were 20 types of cuisine in the data set.
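The training data is a JSON file whose records follow the shape just described; the snippet below parses two invented example records (the ids, cuisines, and ingredients here are illustrative, not taken from the actual file).

```python
import json

# Two made-up records in the described shape: an id, a cuisine label,
# and the list of ingredients.
sample = '''[
  {"id": 101, "cuisine": "greek",
   "ingredients": ["romaine lettuce", "feta cheese", "garlic"]},
  {"id": 102, "cuisine": "southern_us",
   "ingredients": ["plain flour", "ground pepper", "eggs", "milk"]}
]'''

recipes = json.loads(sample)
labels = [r["cuisine"] for r in recipes]          # the 20-way target
ingredient_lists = [r["ingredients"] for r in recipes]
```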

I was able to get a prediction score of about 80 percent with a fairly simple solution. First of all, I removed all rare ingredients from the data set. I did not do much feature engineering, except for creating one simple variable that counts the total number of ingredients per recipe. I also tried some text mining in the form of word stemming, which reduces a recipe’s ingredients to their root words (e.g. tomatoes becomes tomato). That approach did not help much in the end, so I removed it from my script. I saved my training data in a sparse matrix and trained a multiclass classification model using the softmax objective with the xgboost package.
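The preprocessing steps above can be sketched in a few lines. This is a Python illustration of the ideas (the actual solution is an R script); the toy recipes and the `MIN_COUNT` threshold are invented for the sketch.

```python
from collections import Counter

# Toy recipes standing in for the real data set.
recipes = [
    ["salt", "garlic", "olive oil", "saffron"],
    ["salt", "garlic", "butter"],
    ["salt", "olive oil", "basil"],
]

# 1) Drop rare ingredients: keep only those appearing at least MIN_COUNT times.
MIN_COUNT = 2
counts = Counter(ing for r in recipes for ing in r)
vocab = sorted(ing for ing, c in counts.items() if c >= MIN_COUNT)
col = {ing: i for i, ing in enumerate(vocab)}

# 2) One extra feature: the total number of ingredients per recipe
#    (counted before the rare ones are dropped).
n_ingredients = [len(r) for r in recipes]

# 3) Sparse one-hot rows: store only the non-zero column indices per recipe,
#    which is the information a sparse matrix holds.
rows = [sorted(col[ing] for ing in r if ing in col) for r in recipes]

# The sparse matrix plus the count feature would then go to a multiclass
# gradient-boosting model trained with a softmax objective (xgboost).
```

Keeping only the non-zero entries is what makes this representation practical: with thousands of distinct ingredients, a dense one-hot matrix would be almost entirely zeros.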

Kaggle also allows users to publicly share their code on each competition page. It helped me a lot to check out some other people’s code before getting started. You can find my R script for the What’s Cooking challenge on my GitHub.