
Software Engineer - Data Engineering
Wiley
Colombo


Applications are invited from all candidates for the Software Engineer - Data Engineering opening at Wiley.

Closing Date: 06/01/2022

At Wiley, we welcome you for who you are and the background you bring, and we embrace individuals who get excited about learning, whether online or by book. Learning is for everyone, and so is our workplace. Bring your experiences, your perspectives, and your passion. It is in our differences that we empower the way the world learns.

This role is an integral part of the Data Analytics & Insights team at Wiley, which is responsible for key global data integration, delivery through BI tools, and data visualization solutions.

The Data Engineer will be responsible for developing and maintaining data pipelines from multiple data sources into our Data Lake environment. This individual will work closely with development leads, data engineers, and data architects to build and maintain data pipelines feeding the different zones of the Snowflake Data Warehouse environment, and will determine the optimal approach for extracting, transforming, and loading data into the different zones of the Data Lake (Raw/Native, Processed/Transformed, Enriched, Archive). This includes designing and developing the views, ETL processes, extracts, and other processes that prepare data for movement, storage, and consumption, and that manipulate, aggregate, clean, or enrich the data.
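To make the zone-to-zone flow above concrete, here is a minimal, illustrative Python sketch of a raw-to-processed pipeline step. It is a sketch only: the zone paths, field names, and cleaning rule are hypothetical, and a production pipeline would typically be orchestrated (for example, with Airflow) and would load into Snowflake rather than the local filesystem.

```python
import csv
import json
from pathlib import Path

# Hypothetical local stand-ins for the Raw/Native and Processed zones.
RAW = Path("lake/raw")
PROCESSED = Path("lake/processed")

def land_raw(records: list[dict], name: str) -> Path:
    """Land source records untouched in the raw zone."""
    RAW.mkdir(parents=True, exist_ok=True)
    path = RAW / f"{name}.json"
    path.write_text(json.dumps(records))
    return path

def transform(raw_path: Path, name: str) -> Path:
    """Clean and normalize raw records into the processed zone."""
    records = json.loads(raw_path.read_text())
    cleaned = [
        {"id": r["id"], "title": r["title"].strip().lower()}
        for r in records
        if r.get("id") is not None and r.get("title")  # drop incomplete rows
    ]
    PROCESSED.mkdir(parents=True, exist_ok=True)
    path = PROCESSED / f"{name}.csv"
    with path.open("w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["id", "title"])
        writer.writeheader()
        writer.writerows(cleaned)
    return path

if __name__ == "__main__":
    raw = land_raw([{"id": 1, "title": " Data Science 101 "}], "books")
    print(transform(raw, "books"))  # -> lake/processed/books.csv
```

The Enriched and Archive zones would follow the same pattern, with each step reading the previous zone's output.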
How you will make an impact:

  • Work with minimal supervision on a wide range of projects, interfacing with development teams, business analysts, management, and the business community while ensuring that your work is aligned with the Data Analytics and Insights team's strategic direction.
  • Stay current on technology and technique developments in the cloud computing space while continuing to grow your knowledge of Wiley's business.
  • Design and implement secure data pipelines into a Snowflake data warehouse from on-premises and cloud data sources.
  • Process and organize data in the data lake and data warehouse using industry-standard data modeling techniques.
  • Design and implement high-performing data pipelines feeding downstream systems.
  • Collaborate with the Quality Engineering team to produce a test strategy for pipelines and services.
  • Collaborate with other data engineers who will be building or testing data applications in the Snowflake environment.
  • Work with Business Analysts and Users to translate functional specifications into technical requirements and designs.
  • Define best practices and standards for data pipelining and integration with Snowflake data lake and warehouses in collaboration with Data Architect and other Data leads.
  • Ensure enterprise security and access control policies are adhered to in the solution.
  • Create architecture and design artifacts and documents.
  • Participate in and contribute to design and code reviews.

We are looking for people who have:

  • A BSc in Computer Science, Engineering, or an equivalent qualification.
  • 3+ years of experience in data engineering, ETL/ELT, and data warehousing solutions.
  • Excellent communication and presentation skills, both verbal and written, in English.
  • A deep understanding of SQL and relational databases (Snowflake preferred).
  • Experience with Snowflake SnowSQL and writing user-defined functions would be a plus.
  • A strong understanding of various data formats such as CSV, XML, and JSON.
  • Hands-on experience with Python and/or Java development.
  • Hands-on experience with data pipeline tools in cloud-based environments (e.g., Airflow, Argo).
  • Hands-on experience extracting data from multiple source types (e.g., relational DBs, file sources, RESTful APIs, Kafka).
  • A solid understanding of coding standards, engineering best practices, and tooling (e.g., SonarQube, IDE linters).
  • A solid understanding of dimensional modeling, including aspects such as Slowly Changing Dimensions, Late Arriving Facts/Dimensions, and Star Schemas (see the sketch after this list).
  • Experience debugging slow-performing queries and applying query-optimization techniques.
  • Experience writing performance-optimized code.
  • Experience with a major cloud environment: AWS (preferred), Azure, or GCP.
  • Experience in producing technical diagrams and documentation.
  • Experience designing, developing, and integrating solutions on Kubernetes.
  • Experience with open-source data engineering tools will be beneficial.
  • Experience with CI/CD tools and practices will be beneficial.
  • A good understanding of data privacy, governance, and security aspects (including, but not limited to, data encryption, hashing, and tokenization), together with an awareness of data privacy policies and standards such as PII handling, GDPR, PCI DSS, and HIPAA, will be beneficial.
  • Experience working in an agile development team.
  • Experience with JIRA will be beneficial.
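To make the dimensional-modeling point above concrete, here is a minimal, illustrative Python sketch of a Type 2 slowly changing dimension update. The entity, attribute names, and in-memory representation are hypothetical; in a Snowflake warehouse this logic would typically be expressed as a MERGE against the dimension table.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class DimRow:
    customer_id: int
    city: str
    valid_from: date
    valid_to: Optional[date] = None  # None marks the current version
    is_current: bool = True

def apply_scd2(dim: list, customer_id: int, new_city: str, as_of: date) -> None:
    """Expire the current version of a changed row and append a new one."""
    current = next(
        (r for r in dim if r.customer_id == customer_id and r.is_current), None
    )
    if current is None or current.city == new_city:
        return  # unknown customer or no attribute change: nothing to do
    current.valid_to = as_of       # close out the old version...
    current.is_current = False
    dim.append(DimRow(customer_id, new_city, valid_from=as_of))  # ...and open a new one

if __name__ == "__main__":
    dim = [DimRow(42, "Colombo", date(2020, 1, 1))]
    apply_scd2(dim, 42, "Kandy", date(2022, 1, 6))
    for row in dim:
        print(row)  # two versions: the expired Colombo row and the current Kandy row
```

Keeping the full version history, rather than overwriting in place as a Type 1 update would, is what lets downstream facts join to the dimension values that were current at the time of the event.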

About Wiley:
We are in one of the most dynamic periods in our history as technology, globalism, and economic diversity create far-reaching changes in the world. As a learning business, Wiley makes meaningful contributions to research discovery and lifelong learning by helping organizations achieve their goals and people achieve success from education through their careers. We may have been founded over two centuries ago, but our secret to success remains the same: change with the times and adapt to meet the ever-evolving needs of our customers. The company’s headquarters are in Hoboken, New Jersey, with operations in the U.S., Europe, Asia, Australia, and Canada.
Please attach your CV to be considered for this position.


Apply now: https://bit.ly/3lz8eQg
