Sr. Data Engineer-Azure, Snowflake, Python, SaaS
Estimated Salary
R$ 11.385,00 - R$ 17.078,00
Job Description
Sr. Data Engineer

Who We Are
Streamline is a fast-growing consultancy specializing in Enterprise Mobility, Product Engineering, and IT Transformation.
We’re building something special - a team of top-tier strategists, engineers, and designers who thrive on solving hard problems for enterprise clients.
If you want to be part of a company where your contributions are visible from day one, keep reading.
Role Summary
The Senior Data Engineer designs, builds, and optimizes data pipelines that move, transform, and load data into Snowflake using Azure services and serverless components. The role focuses on production-grade engineering: automating data quality, improving reliability, and continually optimizing cloud infrastructure costs.
What We Offer
- A ground-floor opportunity to shape design culture and make an outsized impact at a growing company.
Requirements
- Strong expertise with Azure data stack (e.g., Azure Functions, Azure Data Factory, Event/Service Bus, storage) and Snowflake for analytical workloads
- Proven experience designing and operating production data pipelines, including CI/CD, observability, and incident response for data systems
- Advanced SQL and performance tuning skills, with experience optimizing transformations and Snowflake queries for cost and speed
- Solid programming experience in Python or similar for building reusable ETL components, libraries, and automation
- Experience with streaming and batch ingestion patterns (e.g., Kafka, Spark, Databricks) feeding Snowflake
- Familiarity with BI and analytics tools (e.g., Power BI, Grafana) consuming Snowflake data models
- Background in DevOps practices, including containerization, CI/CD pipelines, and infrastructure-as-code for data platforms
- Experience with modern data transformation tools (e.g., dbt) and data observability platforms for monitoring data quality, lineage, and pipeline health
- Ability to adapt to a fast-paced and dynamic work environment
- Self-motivated and able to work independently with minimal supervision, taking initiative to drive projects forward
- Expert-level problem-solving skills with the ability to diagnose complex data pipeline issues and architect innovative solutions
- Proven ability to integrate and analyze disparate datasets from multiple sources to deliver high-value insights and drive business impact
- Strong problem-solving skills and attention to detail
- Proven ability to manage multiple priorities and deadlines
- Passionate about staying current with emerging data engineering technologies and best practices, driving innovation to enhance product capabilities and maintain competitive advantage
- Experience developing and architecting SaaS platforms with a focus on scalability, multi-tenancy, and cloud-native design patterns
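The requirement above for reusable ETL components in Python, combined with the automated quality checks the role calls for, can be illustrated with a minimal stdlib-only sketch. Every name here (`Pipeline`, `drop_incomplete`, `normalize_amount`) is a hypothetical stand-in for illustration, not part of any listed product's API:

```python
from dataclasses import dataclass, field
from typing import Callable, Iterable

Row = dict
Step = Callable[[Iterable[Row]], Iterable[Row]]

@dataclass
class Pipeline:
    """A reusable ETL component: an ordered chain of transform steps."""
    steps: list[Step] = field(default_factory=list)

    def step(self, fn: Step) -> Step:
        # Register a transform via decorator so components stay composable.
        self.steps.append(fn)
        return fn

    def run(self, rows: Iterable[Row]) -> list[Row]:
        data = list(rows)
        for fn in self.steps:
            data = list(fn(data))
        return data

pipeline = Pipeline()

@pipeline.step
def drop_incomplete(rows):
    # Quality gate: discard rows missing a primary key.
    return [r for r in rows if r.get("id") is not None]

@pipeline.step
def normalize_amount(rows):
    # Transform: coerce string amounts to float, defaulting to 0.0.
    return [{**r, "amount": float(r.get("amount", 0))} for r in rows]

raw = [{"id": 1, "amount": "9.50"}, {"id": None, "amount": "3"}]
clean = pipeline.run(raw)  # one row survives, amount coerced to 9.5
```

The same decorator-registration pattern scales to real pipelines, where each step would typically be unit-tested and instrumented before loading results into Snowflake.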
Responsibilities
- Design, develop, and deploy Azure Functions and broader Azure data services to extract, transform, and load data into Snowflake data models and marts
- Implement automated data quality checks, monitoring, and alerting to ensure accuracy, completeness, and timeliness across all pipelines
- Optimize workloads to reduce cloud hosting costs, including right-sizing compute, tuning queries, and leveraging efficient storage and caching patterns
- Build and maintain ELT/ETL workflows and orchestration to integrate multiple internal and external data sources at scale
- Design data pipelines that support both near real-time streaming data ingestion and scheduled batch processing to meet diverse business requirements
- Collaborate with engineering and product teams to translate requirements into robust, secure, and highly available data solutions
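The data-quality responsibility above (automated checks for accuracy, completeness, and timeliness before load) could look like this in outline. `check_batch`, its field names, and its thresholds are hypothetical, assumed for illustration rather than taken from the posting or any specific framework:

```python
from datetime import datetime, timedelta, timezone

def check_batch(rows, expected_min_rows, max_age):
    """Run completeness, timeliness, and accuracy checks on a batch.

    Returns a list of alert strings; an empty list means the batch
    is safe to load downstream.
    """
    alerts = []
    # Completeness: did we extract at least as many rows as expected?
    if len(rows) < expected_min_rows:
        alerts.append(
            f"completeness: got {len(rows)} rows, expected >= {expected_min_rows}"
        )
    # Timeliness: flag rows extracted longer ago than the freshness window.
    now = datetime.now(timezone.utc)
    stale = [r for r in rows if now - r["loaded_at"] > max_age]
    if stale:
        alerts.append(f"timeliness: {len(stale)} rows older than {max_age}")
    # Accuracy: business keys must never be null.
    null_keys = sum(1 for r in rows if r.get("order_id") is None)
    if null_keys:
        alerts.append(f"accuracy: {null_keys} rows missing order_id")
    return alerts

now = datetime.now(timezone.utc)
batch = [
    {"order_id": 1, "loaded_at": now},
    {"order_id": None, "loaded_at": now - timedelta(hours=2)},
]
alerts = check_batch(batch, expected_min_rows=5, max_age=timedelta(hours=1))
```

In production, a non-empty alert list would typically block the load and page the on-call engineer via the monitoring stack, rather than just returning strings.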