Data Engineer - Python/ETL/Pipeline
Warehouse management system
Permanently Remote or Cambridge
Salary dependent on experience
As a Data Engineer you will work to build and improve the tools and infrastructure that the Data Scientists use for working with large volumes of data and that power user-facing applications. You will work on a nascent data pipeline with plenty of scope to guide its direction and growth, and regularly collaborate with both the Data Scientists and data providers in the wider business to understand their needs. You will build and customise open-source components to create a pipeline that will drive decision making and process improvement. You will create and maintain the ETL infrastructure on our own OpenStack private cloud, and work with Data Scientists to help them turn their models and analysis into production systems.
You will be a problem solver above all else, open to using new technologies, and have an R&D mindset. A strong academic background helps.
- Have at least 5 years' experience working in software development.
- Be an expert in Python; ideally also be comfortable working in other languages when the need arises.
- Be skilled in common tools such as Puppet and Git, or equivalents.
- Have experience working with PostgreSQL (or another relational database system).
- Be comfortable developing for distributed systems.
- Have a keen interest in learning and a desire to help educate those around you in new technology.
- Work effectively with a distributed team and in an agile environment.
The brightest in the industry: with location not being a factor, they have gathered great problem solvers, mathematicians and scientists to work on this greenfield project.
Large logistics/warehouse solutions
Remote, 1 day fortnightly in the Cambridge Office.
GCS Computer Recruitment Services is acting as an Employment Agency in relation to this vacancy.
Role: Permanently Remote Data Engineer - Python/ETL/Pipeline
Job Type: Permanent
Location: Cambridge, Cambridgeshire
Apply for this job now.