Job Description
Job Title: Google Cloud Platform Data Architect
Location: Bentonville, AR
Duration: Long-term contract
Visa: No H-1Bs
Required Skills & Experience:
Mandatory Areas
Must-have skills:
Overall experience level: 10+ years
Tech stack:
- Google Cloud Platform
- Scala
- Spark
- Kafka
- Databases (multiple), with data profiling and data mining skills
- Data profiling, data mapping, and building solutions (a Scala sketch of this stack in action follows this list)
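For context, here is a minimal, purely illustrative sketch of how this stack fits together: a Spark Structured Streaming job in Scala that reads a Kafka topic and lands the events in Cloud Storage on Google Cloud Platform. The broker, topic, and bucket names are all hypothetical, not from this posting, and the job assumes the spark-sql-kafka connector and the GCS connector (standard on Dataproc) are on the classpath.

```scala
import org.apache.spark.sql.SparkSession

object OrdersStreamSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("orders-stream-sketch")
      .getOrCreate()

    // Read a Kafka topic as an unbounded streaming DataFrame.
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092") // hypothetical broker
      .option("subscribe", "orders")                    // hypothetical topic
      .load()

    // Kafka delivers key/value as binary; cast the payload to a string
    // so downstream jobs can parse it.
    val events = raw.selectExpr("CAST(value AS STRING) AS json")

    // Land the raw events in Cloud Storage for downstream batch processing.
    events.writeStream
      .format("parquet")
      .option("path", "gs://example-bucket/orders/") // hypothetical bucket
      .option("checkpointLocation", "gs://example-bucket/checkpoints/orders/")
      .start()
      .awaitTermination()
  }
}
```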
We are looking for a highly energetic and collaborative Senior Data Engineer with experience leading enterprise data projects around business and IT operations. The ideal candidate is an expert at leading projects that develop and test data pipelines, drive data analytics efforts, proactively identify and resolve issues, and build alerting mechanisms using traditional, new, and emerging technologies. Excellent written and verbal communication skills, and the ability to liaise with everyone from technologists to executives, are key to success in this role.
As a Senior Data Engineer, this is your opportunity to:
- Assemble large, complex data sets that meet functional and non-functional business requirements
- Identify, design, and implement internal process improvements, including re-designing infrastructure for greater scalability, optimizing data delivery, and automating manual processes
- Build the infrastructure required for optimal extraction, transformation, and loading of data from various data sources using Google Cloud Platform/Azure and SQL technologies (a batch ETL sketch follows this list)
- Build analytical tools that utilize the data pipeline and provide actionable insight into key business performance metrics, including operational efficiency and customer acquisition
- Work with stakeholders, including the Executive, Product, Data, and Design teams, to support their data infrastructure needs and assist with data-related technical issues
- Strong background in data warehouse design
- Oversee the integration of new technologies and initiatives into data standards and structures
- Strong knowledge of Scala, Spark, PySpark, Python, and SQL
- Experience in cloud platform (Google Cloud Platform/Azure) data migration: source/sink mapping, pipeline builds, workflow implementation, ETL, and data validation processing
- Strong verbal and written communication skills to effectively share findings with stakeholders
- Experience in data analytics, optimization, and machine learning techniques is an added advantage
- Understanding of web-based application development tech stacks such as Java, React.js, and Node.js is a plus
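As referenced above, this is a minimal sketch of the kind of batch extract-transform-load job the role describes, written in Scala for Spark. The JDBC connection details, column names, and output path are hypothetical placeholders rather than details from this posting; the point is the source/sink mapping and the validation step before the sink write.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

object CustomerEtlSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("customer-etl-sketch").getOrCreate()

    // Extract: read the source table over JDBC (connection details hypothetical).
    val src: DataFrame = spark.read
      .format("jdbc")
      .option("url", "jdbc:postgresql://src-host:5432/crm")
      .option("dbtable", "public.customers")
      .option("user", "etl_user")
      .option("password", sys.env.getOrElse("ETL_DB_PASSWORD", ""))
      .load()

    // Transform: trim the business key, derive a load date, normalize nulls.
    val mapped = src
      .withColumn("customer_id", trim(col("customer_id")))
      .withColumn("load_dt", current_date())
      .na.fill(Map("country" -> "UNKNOWN"))

    // Validate: fail fast if mandatory keys are missing before writing to the sink.
    val badKeys = mapped
      .filter(col("customer_id").isNull || col("customer_id") === "")
      .count()
    require(badKeys == 0L, s"Validation failed: $badKeys rows missing customer_id")

    // Load: write partitioned Parquet to the warehouse landing zone (hypothetical path).
    mapped.write
      .mode("overwrite")
      .partitionBy("load_dt")
      .parquet("gs://example-warehouse/customers/")
  }
}
```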
Key Responsibilities
- 20% requirements and design
- 60% coding and testing
- 10% reviewing code written by developers, analyzing and helping to solve problems
- 10% deployments and release planning
You bring:
- Bachelor's degree in Computer Science, Computer Engineering, or a software-related discipline; a Master's degree in a related field is a plus
- 6+ years of experience in data warehousing and Hadoop/Big Data
- 3+ years of experience in strategic data planning, standards, procedures, and governance
- 4+ years of hands-on experience in Scala
- 4+ years of experience writing and tuning SQL and Spark queries (a join-tuning sketch follows this list)
- 3+ years of experience working as a member of an Agile team
- Experience with Kubernetes and containers is a plus
- Experience managing and interpreting Hadoop log files.
- Experience with Hadoop's multiple data processing engines, such as interactive SQL, real-time streaming, data science, and batch processing, handling data stored in a single platform under YARN.
- Experience in Data Analysis, Data Cleaning (Scrubbing), Data Validation and Verification, Data Conversion, Data Migrations and Data Mining.
- Experience in all phases of the data warehouse life cycle, including requirement analysis, design, coding, testing, deployment, and ETL flows
- Experience in architecting, designing, installation, configuration and management of Apache Hadoop Clusters
- Experience analyzing data in HDFS through MapReduce, Hive, and Pig is a plus
- Experience building and optimizing big data pipelines, architectures, and data sets.
- Strong analytic skills related to working with unstructured datasets
- Experience in Migrating Big Data Workloads
- Experience with data pipeline and workflow management
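As noted in the SQL and Spark tuning requirement above, a common first tuning step is broadcasting a small dimension table so Spark avoids shuffling the large fact table. The sketch below is illustrative only; the table paths and join key are hypothetical.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.broadcast

object JoinTuningSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("join-tuning-sketch").getOrCreate()

    val facts = spark.read.parquet("gs://example-warehouse/sales/")  // large fact table (hypothetical)
    val dims  = spark.read.parquet("gs://example-warehouse/stores/") // small dimension (hypothetical)

    // Broadcasting the small dimension replaces a shuffle-heavy SortMergeJoin
    // with a BroadcastHashJoin, often the single biggest win when tuning joins.
    val joined = facts.join(broadcast(dims), Seq("store_id"))

    // Inspect the physical plan to confirm a BroadcastHashJoin was chosen.
    joined.explain()
  }
}
```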