Data Engineer
Full-Time Employee
Job Summary
Onsite Data Engineer Job Description
At Outsourced, we connect top talent with exciting opportunities at innovative global companies. We partner with fast-growing businesses around the world to help them build high-performing teams!
We are seeking a highly skilled Data Engineer to design, build, and optimize data pipelines and solutions within our Microsoft-centric data ecosystem. The ideal candidate will have hands-on experience with modern data engineering tools and cloud platforms, along with a strong understanding of data integration, transformation, and storage. You will play a critical role in ensuring our data is reliable, scalable, and easily consumable by business users, analysts, and data scientists.
Responsibilities
- Design, develop, and maintain scalable ETL/ELT pipelines for ingesting and transforming structured and unstructured data.
- Work with Microsoft Fabric to build dataflows, lakehouses, and end-to-end data solutions.
- Develop and optimize data storage solutions using Azure Data Lake Storage, Azure Synapse, and Azure SQL Database.
- Implement streaming and batch processing using tools like Apache Spark, Azure Stream Analytics, and Kafka.
- Collaborate with data architects to implement data models, governance, and security standards.
- Schedule and orchestrate pipelines using Azure Data Factory or Apache Airflow.
- Ensure data quality, integrity, and availability through validation, monitoring, and error-handling frameworks.
- Support advanced analytics and machine learning by delivering clean, curated, and well-documented datasets.
- Work with stakeholders to understand business needs and deliver data solutions aligned with objectives.
Required Skills & Experience
- Strong programming skills in Python and SQL; familiarity with PySpark/Spark.
- Hands-on experience with Microsoft Fabric (dataflows, lakehouses, pipelines, Synapse integration).
- Proficiency in Azure data services: Azure Data Factory, Azure Synapse Analytics, Azure SQL Database, Azure Data Lake Storage.
- Knowledge of big data frameworks (Spark, Databricks, Hadoop ecosystem).
- Experience with data streaming technologies such as Kafka or Azure Event Hubs.
- Solid understanding of data modeling, warehousing, and performance optimization.
- Familiarity with cloud architecture and security best practices in Azure.
- Experience with orchestration & workflow automation tools (Airflow, Data Factory pipelines).
- Strong problem-solving and collaboration skills with both technical and non-technical teams.
Nice to Have
- Experience with dbt for transformations.
- Familiarity with Power BI for enabling analytics.
- Understanding of CI/CD for data pipelines (e.g., GitHub Actions, Azure DevOps).
- Exposure to machine learning workflows and MLOps.
Education & Background
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- 4-5 years of professional experience in data engineering.