Publicis Sapient, the digital business transformation hub of Publicis Groupe, helps clients drive growth and efficiency and evolve the ways they work, in a world where consumer behaviour and technology are catalysing social and commercial change at an unprecedented pace.
With 19,000 people and over 100 offices around the globe, our expertise spanning technology, data sciences, consulting and creative combined with our culture of innovation enables us to deliver on complex transformation initiatives that accelerate our clients’ businesses through creating the products and services their customers expect. For more information, visit www.publicissapient.com.
Job Description
Fancy joining a global organisation that is revolutionising the digital landscape? Today, as clients across industries are moving from digitally extending their businesses to placing digital at the core, Publicis Sapient has an unprecedented opportunity to help them succeed.
As a Data Engineer in our Data Engineering group, you will be responsible for the design and implementation of high-end software products and services that enable enterprise-scale digital transformation for many of the biggest companies in the world.
Your role will focus on delivering solutions that leverage large-scale data ingestion, processing, storage/querying, and in-stream and batch analytics. As part of a team, you will deliver world-class solutions designed by our senior architects: estimating, designing, coding, testing and deploying, and ensuring scalability and performance using the latest frameworks and platforms.
You will also be involved in building technology prototypes for validation and assessing technical designs for functional and non-functional completeness.
As a hands-on technologist with a strong programming background, you will be excited to join a talented and supportive community of Data Engineers who are passionate about building the best possible solutions for our clients and who endorse a culture of lifelong learning and collaboration.
What you’ll bring:
- Good experience with data-related technologies, including Big Data and data-related cloud services (AWS / Azure / GCP)
- Good hands-on experience with at least one distributed data processing framework, e.g. Spark (Core, Streaming, SQL), Storm or Flink
- Expertise in one or more of Java (preferred), Scala and Python
- Good data modelling experience addressing scale and read/write performance
- Hands-on working knowledge of NoSQL and MPP data platforms such as HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL Data Warehouse and GCP BigQuery
- Working knowledge of data platform services on at least one cloud platform, covering IAM and data security
- Hands-on skills building DevOps pipelines for data solutions, including automated testing
You’ll also likely have some of the following:
- Sound awareness of the wider data technology ecosystem, including Hadoop and open-source frameworks, as well as cloud services across batch and stream processing
- Experience of tuning and optimizing big data solutions
- Experience of data security (at-rest/in-transit)
- Knowledge of distributed messaging frameworks such as ActiveMQ, RabbitMQ, Solace or Kafka, search and indexing, and microservices architectures
- Exposure to data governance, catalog, lineage and associated tools
- A certification in one or more cloud platforms or big data technologies
- Demonstrated active participation in the Data Engineering thought community (e.g. blogs, keynote sessions, POV/POC, hackathons)
- You will enjoy client-facing and/or consulting roles
- You’ll have good communication and presentation skills
- You’ll have good general analytical and problem-solving skills
- You will pick up new technologies quickly, comparing and contrasting them through POCs and research
- You’ll be a self-starter who requires minimal oversight and can prioritize and manage multiple tasks
- You will likely have a Bachelor’s/Master’s Degree in Computer Engineering, Computer Science, or a related field