Technical challenges excite you, and you enjoy finding simple and elegant solutions to complex problems. You care about quality, performance and time to production. You thrive in a highly collaborative, agile, open and honest work environment. You want to bring new ideas to the table, along with a plan to implement them. You want to help define the vision for the software that you will build, and you care about the business impact of your ideas.
The Data Engineer will be responsible for developing, tuning, and implementing data pipelines for our in-house reporting solution, sending data to and from a Hadoop infrastructure. The ideal candidate has deep knowledge of SQL, preferably on Hadoop/HBase (Hive, Impala, or Phoenix), though large-scale ETL scripting experience is also acceptable. This person will work alongside the data architect and front-end developer to surface data, metrics, and insights by building and maintaining a fault-tolerant data pipeline.
WHAT YOU’LL DO:
WHAT WE’RE LOOKING FOR:
BONUS POINTS IF YOU:
WHAT WE OFFER:
WHO WE ARE:
We provide global financial settlement solutions to ultimately enable the world to exchange value like it already exchanges information – giving rise to an Internet of Value (IoV). Ripple solutions lower the total cost of settlement by enabling banks to transact directly, without correspondent banks, and with real-time certainty of settlement. Banks around the world are partnering with Ripple to improve their cross-border payment offerings, and to join the growing, global network of financial institutions and market makers laying the foundation for the Internet of Value.
We’re backed by prominent investors like Google Ventures, Andreessen Horowitz, CME Group, Santander Bank and Seagate Technology. We’re headquartered in San Francisco’s Financial District with offices in New York and Sydney. As an industry advocate for the Internet of Value, Ripple sits on the Federal Reserve’s Faster Payments Task Force Steering Committee and co-chairs the W3C’s Web Payments Working Group.