Bromley Business Centre, 27 Hastings Road, Bromley, Kent BR2 8NA, UK

Data Engineer

Position: Data Engineer
Hiring type: Contract
Location: 100% remote

Must-have skills
· Python (Pandas)
· NumPy
· Databricks
· PySpark
· Azure Data Factory (ADF)
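The Pandas/PySpark work behind these requirements usually amounts to small filter-and-aggregate transforms. As a hedged illustration only (hypothetical data and column names, written in plain Python rather than Pandas so the sketch stays dependency-free):

```python
# Illustrative only: the filter-and-aggregate step below is what a candidate
# would typically express with Pandas (groupby/agg) or PySpark (groupBy/agg).
# The records and column names are hypothetical.
from collections import defaultdict

rows = [
    {"region": "EMEA", "amount": 120.0},
    {"region": "APAC", "amount": 80.0},
    {"region": "EMEA", "amount": 50.0},
]

def total_by_region(rows, min_amount=0.0):
    """Filter rows by a threshold, then sum amounts per region."""
    totals = defaultdict(float)
    for row in rows:
        if row["amount"] >= min_amount:
            totals[row["region"]] += row["amount"]
    return dict(totals)

print(total_by_region(rows))  # {'EMEA': 170.0, 'APAC': 80.0}
```

In Pandas the same step would be roughly `df[df.amount >= t].groupby("region").amount.sum()`; the point is the shape of the transform, not the library.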

Data Engineer

Primary: Power BI + SSRS + ADF + SQL
Also: Power Apps, paginated reports

Data Engineer

Role – Data Engineer
Location – Warsaw Poland
Type – Contract

Your key responsibilities include:

· Contributing to the development of data architecture for the enterprise by developing the models and standards by which data is sourced, stored, distributed and governed
· Contributing to the design of solutions that catalog, organize and store the data domain taxonomy for the business to access, manage and consume
· Providing support for the upkeep of data dictionaries, source data mapping and linkage to efficiently track business and technical lineage of the data end to end (from source to service-line consumption)
· Performing due diligence to confirm that developed solutions comply with the architectural design
· Coordinating across multidisciplinary teams to understand data requirements and recommending internal process improvements
· Maintaining and contributing to data management standards to promote optimization and consistency

Must-have skills:

Azure Data Lake Storage Gen2
Azure Databricks
Spark
SQL Server Management Studio
Azure Data Factory
PySpark

Data Engineer

We are looking for a Data Engineer who can be based in the Nordics/Poland/Netherlands/Sweden.
Contract: 12 months
Rate: Open

Skillset
Power BI, DAX
Databricks
Python / PySpark
SQL
T-SQL stored procedure programming
Demonstrated experience of C# and SQL development, with the ability to analyze SQL execution plans and troubleshoot
SQL Server 2016 and above
Function Apps, C#
PowerShell
Power Apps
Experience with ETL tools: SSIS, ADF
Experience with big data tools like Delta Lake, Databricks
Experience with relational SQL and NoSQL databases (Tabular and DAX)
Power BI (M, DAX, visuals, sharing) experience
DP-200: Implementing an Azure Data Solution and DP-201: Designing an Azure Data Solution skills
Experience working in scrum or with scrum teams
Experience working in customer-facing situations
Experience with object-oriented/object function scripting languages is beneficial: Python, Java, C++, Scala, etc.
Experience with DWH automation tools is beneficial: WhereScape, BimlFlex etc.
The Data Engineer will work in one of the Asset Analytics scrum teams, which provide analytics solutions for all business areas.

Responsibilities for Data Engineer
· Create and maintain optimal data pipeline architecture.
· Assemble large, complex data sets that meet functional / non-functional business requirements.
· Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
· Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and Azure
· Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
· Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
· Keep our data separated and secure across national boundaries through multiple data centers
· Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
· Work with data and analytics experts to strive for greater functionality in our data systems.
Qualifications for Data Engineer
· Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases.
· Experience building and optimizing big data pipelines, architectures and data sets.
· Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
· Strong analytic skills related to working with unstructured datasets.
· Experience building processes supporting data transformation, data structures, metadata, dependency and workload management.
· A successful history of manipulating, processing and extracting value from large disconnected datasets.
· Working knowledge of message queuing, stream processing, and highly scalable big data stores.
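The "advanced SQL / query authoring" qualification above typically means schema-aware aggregate queries. A hedged, self-contained sketch using Python's stdlib sqlite3 as a stand-in for a production RDBMS (the table and data are hypothetical):

```python
# Hypothetical illustration of query authoring against a relational schema,
# using sqlite3 from the Python standard library.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL);
    INSERT INTO orders (customer, total) VALUES
        ('acme', 100.0), ('acme', 250.0), ('globex', 75.0);
""")

# An aggregate query of the kind this role's SQL work involves:
# order counts and revenue per customer, highest revenue first.
cur = conn.execute("""
    SELECT customer, COUNT(*) AS n_orders, SUM(total) AS revenue
    FROM orders
    GROUP BY customer
    ORDER BY revenue DESC
""")
for customer, n_orders, revenue in cur:
    print(customer, n_orders, revenue)
# acme 2 350.0
# globex 1 75.0
```

The same GROUP BY / aggregate pattern carries over unchanged to SQL Server, Oracle or PostgreSQL; only the connection layer differs.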

Data Engineer

Role – Data Engineer
Location – London
Type – Permanent

· Design and develop foundational micro-services exposing APIs using object-oriented programming language.
· Design and develop scalable distributed data pipelines using cluster-compute frameworks.
· Develop scalable system designs that solve business problems.
· Apply best practices, algorithms, design patterns and data structures to produce maintainable code.
· Demonstrate structured approach to software development (design, develop, test, instrument and monitor).
· Act as a lead on design, code, operational reviews and be a team-player.
· Mentor software and data engineers on the team.

· Bachelor's or Master's degree, or equivalent technical work experience.
· 8+ years of work experience developing software in production.
· Proficient in one or more object-oriented programming languages such as Java, Scala.
· Developed and maintained software systems in a production environment.
· Experienced working in a lean and agile environment.
· Top 3 desired skill sets – Java, large-scale web services and distributed systems
· Prior experience working with AWS technologies such as EC2, DDB, S3, API Gateway, Data Pipelines.

Data Engineer

Role: Data Engineer
Location: Spain
Type: Contract/Permanent

Professional skills and qualifications:
· Hands-on experience with ETL process design, implementation, and use of ETL tools;
· Good understanding of data processing principles;
· Good understanding of software development lifecycle;
· Good analytical and communication skills;
· Fluency in English (reading, speaking, writing).
Job experience/knowledge of:
· RDBMS (Microsoft SQL, Oracle, PostgreSQL or similar);
· Git;
Following experience will be considered as an advantage:
· MapReduce frameworks (Spark, Databricks);
· Python and data processing frameworks (Pandas);
· Packaging Python components and continuous automatic tests;
· Pipeline orchestration frameworks (Kedro, Airflow, Luigi, Dask).
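The orchestration frameworks listed above (Kedro, Airflow, Luigi, Dask) all schedule work along a dependency DAG. A minimal, framework-free sketch of that core idea, using Python's stdlib graphlib with hypothetical task names:

```python
# Minimal illustration of DAG-based task ordering, the core idea behind
# orchestration frameworks like Airflow or Kedro. Tasks and dependencies
# here are hypothetical.
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on.
deps = {
    "extract": set(),
    "clean": {"extract"},
    "model": {"clean"},
    "report": {"clean", "model"},
}

# static_order() yields every task after all of its dependencies.
order = list(TopologicalSorter(deps).static_order())
print(order)  # a valid order, e.g. ['extract', 'clean', 'model', 'report']
```

Real orchestrators add scheduling, retries and distributed execution on top, but the dependency-resolution step is exactly this topological sort.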
Responsibilities (may vary depending on title level):
· Complete assigned software development tasks, document and test software code within defined timeframes and according to the company and project standards;
· Review software code, report review results and implement required improvements;
· Work closely with data architects and business analysts;
· Work closely with software testing team and assist them when required.

Data Engineer

Role: Data Engineer
Location: United Kingdom
Type: Contract/Permanent

Professional skills and qualifications:
· Hands-on experience with ETL process design, implementation, and use of ETL tools;
· Good understanding of data processing principles;
· Good understanding of software development lifecycle;
· Good analytical and communication skills;
· Fluency in English (reading, speaking, writing).
Job experience/knowledge of:
· RDBMS (Microsoft SQL, Oracle, PostgreSQL or similar);
· Git;
Following experience will be considered as an advantage:
· MapReduce frameworks (Spark, Databricks);
· Python and data processing frameworks (Pandas);
· Packaging Python components and continuous automatic tests;
· Pipeline orchestration frameworks (Kedro, Airflow, Luigi, Dask).
Responsibilities (may vary depending on title level):
· Complete assigned software development tasks, document and test software code within defined timeframes and according to the company and project standards;
· Review software code, report review results and implement required improvements;
· Work closely with data architects and business analysts;
· Work closely with software testing team and assist them when required.

Data Engineer

Role: Azure Data Engineer
Location: WFH
Duration: 6 months
Rate: GBP 400 per day, outside IR35 (negotiable)

Essential skills and experience
· Strong technical expertise using core Microsoft Azure data technologies: Data Factory, Databricks, Data Lake, Azure SQL, Synapse, Data Catalog, Purview
· A strong background in Python development, particularly for data engineering, and SQL
· Experience working with Power BI: Power Query, DAX, Power BI Service
· Experience of team working and task tracking with Azure DevOps following Scrum / Agile methodologies.
· Experience with Azure databases (Synapse EDW, SQL Managed Instance, Azure SQL)
· Excellent communication and interpersonal skills
· Experience with TDD, Continuous Integration, Continuous Deployment, Peer Review, Automated Testing for DataOps solution delivery
Duties
· Engineering cloud-based data lake patterns to connect and leverage organisational & external data
· Capture, Ingest, Process and Store large volumes of data and activate for multiple data science use cases
· Enable higher-quality data to be created & managed in our customers' organisations and standardise the definition of business-critical data
· Develop data products that are commercially viable and technically excellent by utilising (primarily) cloud-native patterns available in the market
· Help to diagnose and fix issues in complex technical environments, provide technical support and guidance through prototyping, testing, build, and launch of data products
· Help to understand customer requirements and work collaboratively with other architects and consultants to design data solutions that deliver the expected outcomes.

Data Engineer

Data Engineer
Initial 12- to 18-month C2C contract or full-time permanent employment
Starting ASAP
Remote working (West Coast-based project)
Hourly rate & salary negotiable depending on experience

Green Card holders or US citizens only


Responsibilities:

· Design and build data transformations efficiently and reliably for different purposes (e.g. reporting, growth analysis, multi-dimensional analysis)
· Design and implement reliable, scalable, robust and extensible big data systems that support core products and business
· Establish solid design and best engineering practices for engineers as well as non-technical people

Minimum qualifications:

· BS or MS degree in Computer Science or related technical field or equivalent practical experience
· Experience with Big Data technologies (Hadoop, MapReduce, Hive, Spark, Metastore, Presto, Flume, Kafka, ClickHouse, Flink, etc.)
· Experience in performing data analysis, data ingestion and data integration
· Experience with ETL (Extraction, Transformation & Loading) and architecting data systems
· Experience with schema design, data modeling and SQL queries
· Passionate and self-motivated about technologies in the Big Data area

Data Engineer

Our client is looking for a Data Engineer to join them on an exciting project in St Denis, France.

Context of the mission

The Data Engineer will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams.
The ideal candidate is an experienced data pipeline builder who enjoys optimizing data systems and building them from the ground up.
The Data Engineer will support our software developers, database architects, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects.
The candidate must be self-directed, comfortable supporting the data needs of multiple teams, systems and products, and able to learn cutting-edge technologies on the job.

Mission objectives and tasks

The successful candidate will be particularly involved in the following day-to-day activities:

Develop and maintain large-scale data processing systems
Ensure constructed ETL pipelines support high-volume data streams
Discover opportunities for data acquisition, storage and querying in a performant way
Develop data set processes for data modeling, mining and production
Employ a variety of languages and tools (awareness of open-source Apache data frameworks is a must-have competence)
Understand, customize and maintain open-source projects and contribute to the development of new state-of-the-art frameworks
Experience in DevOps (based on Docker and Kubernetes)
Deliverables

Source code (application components and unit tests)
Knowledge

Experience with big data tools: Hadoop, Spark, Kafka, NoSQL DBs, NiFi, Beam, Zookeeper …
Construct data ETL pipelines that are scalable and fault tolerant based on big data technologies
Knowledge of teamwork on a dev platform
Agile approach (SCRUM, SAFE), DevOps
Inference implementation based on deep learning models with frameworks like Apache MXNet, TensorFlow, Keras
Fluency in English and French essential