Our client is a multinational company specializing in IT outsourcing.
Requirements:
— 3-8 years of experience with data modeling, data mapping, and business glossary creation;
— Extensive experience with defining and implementing data models (conceptual, logical, and physical);
— Design and development experience with domain data models is good to have (ideally in the Pharmaceutical Manufacturing, Supply Chain, or Quality Assurance domains);
— Expertise in different data modeling methodologies;
— Working knowledge of data modeling tools (e.g., Erwin, ER/Studio) and data cataloging tools;
— Experience building data pipelines using Spark (Scala or PySpark) on Databricks;
— Development experience in the AWS cloud environment (S3, EMR, Databricks, Amazon Redshift, Athena);
— Experience working with REST APIs;
— Ability to manipulate data and to extract and load data from multiple sources into a target schema;
— Ability to work with multiple file formats (JSON, XML, RDF, reports, etc.) and to analyze data as required for further processing;
— Hands-on DevOps experience is nice to have, but knowledge of DevOps is required;
— Understanding of core AWS services and basic AWS architecture best practices;
— Snowflake experience is a plus;
— Ability to understand requirements and adapt to changing requirements;
— Ability to understand business processes;
— Strong written, verbal, and listening communication skills.
Responsibilities:
— Work with business stakeholders and subject matter experts (SMEs) to understand business processes and translate them into data models;
— Develop and maintain conceptual and logical data models following the client's data modeling guidelines;
— Document and maintain the business glossary in the enterprise data catalog solution;
— Evaluate business data models and physical data models for variances and discrepancies;
— Support project team in adopting business data models;
— Guide project team to map physical data models to the business glossary.