Big Data Architect (Technology), Tampa, FL, USA
Job Description
  • In-depth understanding of Spark architecture, including Spark Core, Spark SQL, DataFrames, Spark Streaming, and Kafka
  • Good understanding of Hadoop architecture and its components, such as HDFS, YARN, and MapReduce
  • Experience with the practical application of data warehousing concepts, methodologies, and frameworks using both traditional (Oracle, Teradata, etc.) and modern distributed (Spark SQL, Hadoop, Kafka) technologies
  • Experience with enterprise data management, Business Intelligence, data integration, and SQL database implementations
  • Hands-on experience architecting, designing, and implementing data ingestion pipelines at scale on the Cloudera/Hortonworks platform
  • Experience successfully manipulating, processing, and extracting value from large and disconnected data sets
  • Ability to architect and develop data ingestion processes; experience working with structured and unstructured data sets
  • Data warehouse implementation, backup, and recovery strategies
  • Experience designing reporting applications on Big Data platforms
  • Experience writing complex, high-performance queries, and experience with distributed query engines such as Spark SQL on Hive
  • Minimum 2-3 years of hands-on experience with Spark, including design and performance tuning
  • Ability to research and resolve technical performance bottlenecks in Spark implementations
  • Experience designing solutions for multiple large data warehouses, with a good understanding of cluster and parallel architectures, high-scale or distributed RDBMS, and/or NoSQL platforms
  • Active participation in high-level engineering team activities, such as suggesting architecture improvements, recommending process improvements, and conducting tool evaluations
  • Experience working with complex data models, large databases, extensive reporting, and data analysis is a plus
  • Solution architecture experience in Big Data technologies (e.g., Spark, Hadoop, MapR)
  • Hands-on experience working in Linux environments (including Kerberos), with the ability to provide profiling, optimization, and tuning guidance
  • Data visualization experience; data catalog and data lineage experience is a plus
  • Experience with data reconciliation across multiple sources, data quality tools, and entitlements integration
  • Experience providing solutions for handling data encryption, PII, and confidential data sets
  • Ability to propose web-service and API solutions for data retrieval on Hadoop clusters
