The Data Engineer role focuses on activities that support the integration, design and modeling, storage, and organization of data as it is transformed into information according to business processes and consumed through analytics and insight reporting.
This role is responsible for building industrialized data assets and optimized data pipelines for analytics solutions, extracting, transforming, and loading data from source systems into targets including, but not limited to, data warehouses, data marts, multidimensional or tabular cubes, and analytical data sets.
A mentality and attitude of close collaboration across different functional teams is therefore a must-have quality for this individual.
This individual must have strong experience building large, complex data sets, be able to articulate best practices related to data quality and cleansing, and be able to incorporate those practices into delivered solutions.
Leadership and interpersonal skills:
- Ability to work with the client directly to understand and meet their requirements
- Excellent verbal and written communication abilities: must effectively communicate with technical and non-technical teams
- Participate in teams working in an Agile/Scrum or Waterfall process, ensuring stories/tasks are well defined and the team has all the information and tools needed to succeed
- Work with the Project Manager and project stakeholders to ensure we meet our commitments
- Ability to work independently on tasks and deliver with a high-level of quality
- Ability to work in teams and be open to comments and feedback
- Ability to learn quickly and to adapt to a fast-paced environment
Technical skills:
- Data processing, data design and modeling, model deployment, visualization, and reporting and interpretation of results
- Experience treating and manipulating big data
- 2+ years of Python experience (ideally including PySpark)
- 2+ years of SQL skills, including the ability to write, tune, and interpret SQL queries; tool-specific experience with Hadoop, Microsoft SQL Server, and Oracle/PostgreSQL
- Experience building large, complex big data sets and delivery mechanisms to support advanced analytics and insights analysis
- Knowledge of tools for data quality, data cleansing, data wrangling, and data standards
Nice to have:
- Experience with data aggregation and manipulation using big data frameworks, e.g., Hadoop, Spark, Kafka
- Experience working with SAS or R is beneficial but not required.
- Familiarity with data profiling and wrangling tools such as Alteryx, Trifacta, Paxata, or Talend
- Experience with Tableau, including designing and developing dashboards in Tableau Desktop for various organizational metrics and indicators using best practices and industry standards
- 3–5 years' experience translating business needs into data requirements and designing data-driven solutions that support analytics and insights
Employer name
IBM Client Innovation Center
Job location
2 to 4 years
Number of positions available
Until 31 October 2019