
Bangkok
August 29, 2022
Responsibilities
- Lead data engineers and complex projects to completion.
- Design, plan, prioritize, and take responsibility for projects.
- Build scalable, maintainable data pipelines (batch/streaming ingestion, ELT/ETL, data products) and ensure their quality, reliability, and integrity.
- Collaborate with analytics and business teams to improve the data models that feed business intelligence tools, increase data accessibility, and foster data-driven decision-making across the organization.
- Coach and mentor junior data engineers to develop their skills.
Qualifications
- Bachelor’s Degree in Computer Science, Software Engineering, Information Technology, or equivalent industry experience.
- 6 years of experience in Big Data technologies and their ecosystem.
- Proficient in SQL, Python, and Linux/Unix.
- Experience in the Hadoop ecosystem, such as HDFS, Spark, Hive, Sqoop, Airflow, Oozie, Ranger, Ambari, and Flink.
- Experience with cloud computing platforms such as AWS, Azure, and GCP.
- Experience working with relational databases such as MySQL, PostgreSQL, SQL Server, and Oracle.
- Experience working with NoSQL databases such as MongoDB, HBase, Cassandra, Bigtable, DynamoDB, and Cosmos DB.
- Experience working with search engine tools such as Elasticsearch.
- Experience in end-to-end data management solutions.
- Experience with data migration tools such as Fivetran, Informatica, and database migration tools.
- Ability to design data lakes, data warehouses, and data marts on AWS, Azure, GCP, and on-premise.
- Understanding of data lake management, such as lifecycle management, storage class design, and access control.
- Ability to optimize data warehouses and data marts through indexing, clustering, and partitioning.
- Ability to design data models (schema design) such as star schemas, snowflake schemas, fact tables, and dimension tables.
- Experience in ETL/ELT solutions, both on cloud and on-premise.
- Ability to develop ETL/ELT solutions in Python, Spark, and SQL.
- Experience in container management with Docker and Kubernetes.
- Understanding of real-time and batch processing.
- Experience with real-time (streaming) processing tools such as Apache Kafka, RabbitMQ, Cloud Pub/Sub, Azure Event Hubs, and Amazon Kinesis.
- Experience with workflow orchestration, monitoring, or data pipeline tools such as Apache Airflow, Azure Data Factory, Luigi, NiFi, and AWS Step Functions.
- Innovative problem-solving skills, with the ability to identify and resolve complex architectural issues.
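To illustrate the dimensional-modeling and ETL skills listed above (star schema, fact and dimension tables, aggregation in Python/SQL), here is a minimal sketch using Python's built-in SQLite. The schema and sample data are hypothetical examples only, not Coraline's actual stack:

```python
import sqlite3

def build_star_schema(conn):
    """Create a hypothetical star schema: one fact table plus a date dimension."""
    conn.executescript("""
        CREATE TABLE dim_date (
            date_key  INTEGER PRIMARY KEY,   -- surrogate key, e.g. 20220829
            full_date TEXT,
            year      INTEGER
        );
        CREATE TABLE fact_sales (
            sale_id  INTEGER PRIMARY KEY,
            date_key INTEGER REFERENCES dim_date(date_key),
            amount   REAL
        );
    """)

def load_sample_rows(conn):
    # The "extract" step is simulated here with inline sample data.
    conn.executemany("INSERT INTO dim_date VALUES (?, ?, ?)",
                     [(20220829, "2022-08-29", 2022)])
    conn.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                     [(1, 20220829, 100.0), (2, 20220829, 250.5)])

def total_sales_by_year(conn):
    # A typical BI-style aggregate joining the fact table to a dimension.
    cur = conn.execute("""
        SELECT d.year, SUM(f.amount)
        FROM fact_sales f JOIN dim_date d USING (date_key)
        GROUP BY d.year
    """)
    return dict(cur.fetchall())

conn = sqlite3.connect(":memory:")
build_star_schema(conn)
load_sample_rows(conn)
print(total_sales_by_year(conn))  # {2022: 350.5}
```

The same fact/dimension split scales up to a production warehouse, where the join keys become the partitioning and clustering columns mentioned above.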
- Ability to communicate clearly and work closely with cross-functional teams such as Data Analysts, Data Visualization, Software Engineering, and business functions.
- Good command of English.
- Excellent organizational and leadership skills.
- A good team player.
Benefits
- Work from anywhere
- 10 days of vacation leave per year (prorated in the first year)
- Project success celebrations
- Team-building activities and company outings
- Training, development, and career opportunities
- Performance bonus once a year and salary increments twice a year
- GH BANK home loan
- Social security contributions
- Allianz health insurance after passing the probation period
- Annual physical check-up
- Other benefits based on company policy
About us
Coraline is a Big Data solutions company focused on solving business problems by creating value from data, offering full service from data management to in-depth analysis, with the goal of turning data into solutions and creating data-driven action within organizations. Coraline was founded in 2017 by Dr. Asama Kulwanichchaiyanan, a Data Science expert with more than 10 years of experience in Big Data projects for the US government and many other industries in Thailand, with the vision of increasing business potential through data.