Data Science Engineer

Job Description

Rakuten Group provides over 70 internet services to more than a billion users around the world.  Needless to say, we collect a great deal of data.  To make sense of all of it, we are seeking highly motivated data science engineers who can partner with our data scientists to create data-centric solutions for our businesses.

Here are two examples of projects you would be working on:

1. Improving UX through data

Creating the architecture and algorithms that deliver customized content, allowing customers to discover their favorite items and browse seamlessly through over 100MM items and over 40,000 merchant pages on our e-commerce platform.  Your challenge will be to 1) understand the content on the EC platform; 2) understand the customers and their preferences; 3) match that content to the customers' preferences; and 4) identify the metrics that measure customer satisfaction.  We would love to solve these exciting problems together with you.

2. Empowering clients (merchants/hotels/publishers) on various Rakuten platforms

Designing a scalable marketing platform, based on data science and business intelligence, where merchants and hotels can segment and target their customers easily without any sophisticated analytical knowledge.  One particular project we are pursuing right now is called the "Story Optimization Project," in which we empower merchants to improve their item display pages through machine learning.  Using a huge volume of customer behavior data together with the item pages created by the merchants, we have a unique opportunity to help our merchants customize their pages in order to improve customer satisfaction.

Responsibilities

The Data Science Engineer will be expected to understand structured and unstructured data, to create and automate the processing of that data (such as ETL) in order to design data marts, and to optimize our data architectures.  You will work closely with other engineers and data scientists to apply data science solutions to specific business questions.
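To give a rough sense of the kind of ETL work this role involves, below is a minimal, hypothetical PySpark sketch.  The paths, table names, and columns are illustrative assumptions only, not a description of Rakuten's actual pipelines.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("purchase_mart_etl").getOrCreate()

# Extract: raw behavior events (assumed Parquet landing zone on HDFS)
events = spark.read.parquet("hdfs:///landing/events/dt=2024-01-01")

# Transform: keep purchase events and aggregate per merchant and item
purchases = (
    events
    .filter(F.col("event_type") == "purchase")
    .groupBy("merchant_id", "item_id")
    .agg(
        F.count("*").alias("purchase_count"),
        F.sum("price").alias("gross_sales"),
    )
)

# Load: write a partitioned data-mart table for downstream analytics
(purchases
 .write
 .mode("overwrite")
 .partitionBy("merchant_id")
 .parquet("hdfs:///mart/merchant_item_daily/dt=2024-01-01"))

spark.stop()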

Qualifications

Minimum Qualifications:

  • 2+ years of experience with software development practices;
  • 2+ years of experience with big data technologies such as Hadoop, Teradata, Spark, and MongoDB;
  • Experience building and operating data warehouse solutions;
  • Proficiency with Hadoop, Teradata, and related toolkits;
  • Proficiency in designing and optimizing data marts for specific requirements;
  • Proficiency in designing efficient and robust ETL workflows.

Preferred Qualifications:

  • Ability to work in a team-oriented environment;
  • Ability to work autonomously in Agile Scrum processes;
  • Solid knowledge of large-volume data processing;
  • Experience with big data ML toolkits;
  • Experience with NoSQL databases such as Cassandra and MongoDB;
  • Experience building or maintaining data science solutions;
  • Familiarity with data mining concepts and machine learning algorithms.