Data Engineer

Fond du Lac
Full Time

It is our people behind life’s passions who will make the big difference. If you are interested in becoming part of a company that delivers market-leading products, driving your own career and working with brands committed to active lifestyles, then you’ve found your fit.

Have what it takes? Join us.

The Data Engineer will provide technical leadership and support for our existing (and future) data and analytics platforms, ensuring uninterrupted, easy, secure, and robust retrieval of data needed for analytics and insights.

The Data Engineer is responsible for implementing next-generation, highly available, and scalable data pipelines for use across various functions of the company. If you want to make a large impact on a highly visible team, this is your chance. This person will help determine usage patterns; recommend and manage new technologies and proofs of concept; lead data ingestion and integration processes; partner closely with the Enterprise Data & Analytics Architect and the Security team; and understand the holistic view of the Big Data platform.

The Data Engineer will build, test, and maintain robust, scalable, secure, highly available data pipelines and reservoirs that store, clean, transform, and aggregate raw data into data sources. A successful Data Engineer will be well versed in Microsoft technologies (Synapse and Synapse pipelines, ADLS Gen2, Data Factory, Event Hub, Cosmos DB, Azure SQL, Azure DevOps, etc.), Spark clusters, SCADA systems, data lake construction and maintenance, and logical design. Equivalent experience in AWS will be considered.

The Data Engineer will participate in project meetings and teams to ensure the right data is captured in the right manner. They must be a solid team player who is inquisitive, creative, adaptable, and tenacious, and a strong communicator who understands their audience.


  • Design, construct, install, test, and maintain robust, scalable, secure, and fault-tolerant data management systems
  • Research and advise the MDM team on data management best practices, procedures, and standards
  • Integrate existing enterprise and one-off data sources into the platform
  • Develop APIs and microservices that support data pipelines and data integration
  • Ensure business users and teammates have access to the appropriate data sources
  • Maintain extensive knowledge of various databases
  • Collaborate closely with business users and subject matter experts (e.g., enterprise architects, modelers, and IT team members)
  • Keep up to date on cutting-edge and established open-source projects as well as proprietary big data solutions
  • Make decisions and recommendations on project priorities, functional design changes, process improvements, and problem resolution
  • Ensure that accurate and thorough documentation is maintained
  • Monitor and troubleshoot performance issues for production pipelines
  • Collaborate with the development team to create the right data pipeline for the task
  • Collaborate with data scientists to design scalable pipelines for their models
  • Identify the right choice of technology for current and new problems
  • Lead by example to bring best practices into the implementation of robust data pipelines
  • Work cross-functionally with teams on data migration, translation, and organization initiatives
  • Develop data processes for data workflow into production environments
  • Translate large volumes of raw, unstructured data into easily digestible formats
  • Recommend ways to improve data reliability, efficiency, and quality
  • Design and build data pipelines from various data sources to a target data lakehouse using batch and streaming data load strategies and cutting-edge cloud technologies
  • Conceptualize and build infrastructure that allows data to be accessed and analyzed effectively


  • Demonstrated 2+ years of experience:
    • designing, constructing, securing, and maintaining data management systems such as data lakes, data lakehouses, and logical data warehouses
    • ensuring that data flows smoothly from source to destination and supporting business users such as analysts and data scientists in accessing data for analytics
    • implementing multi-terabyte data solutions
    • leading data-related projects and project teams
    • working with SQL-based technologies (SQL Server, Oracle)
    • working with NoSQL technologies (Cassandra, MongoDB, Cosmos DB)
    • developing APIs and microservices
  • Experience working in cloud environments, preferably Azure
  • Proven ability to gather requirements and lead projects in both agile and waterfall environments
  • Requires analytical ability, creativity, and judgment in analyzing, developing, and implementing technology solutions.
  • Ability to work in teams, fostering a collaborative environment.
  • Proven experience in an environment with multiple tasks and changing priorities.
  • Must have curiosity to know how things work and how to make them better.
  • Must be detail oriented and comfortable both understanding and explaining the intricacies of how and why a data pipeline works as it does.
  • Experience building and optimizing large-scale data pipelines and data-centric applications using Big Data tooling such as Hadoop, Spark, Hive, and Airflow in a production setting, preferably on Azure.
  • Hands-on experience deploying data engineering tooling from the Microsoft Azure Data Engineering and Analytics stacks, including Azure Synapse Analytics, Azure Data Factory, Azure Event Hub, Azure Databricks, Active Directory, and Power BI. Equivalent experience using AWS can be substituted.
  • Experience in Agile development, Azure DevOps and Terraform
  • Experience working with structured and unstructured data
  • Bachelor’s degree in Computer Science, Engineering, or a related field
  • Comprehensive knowledge of modern data engineering tools like Docker, Kafka, Spark, PySpark and Hadoop
  • Proficient with more than one of the following languages: C, C++, C#, Java, Python
  • Experience with workflow orchestration tools
  • Working experience deploying pipelines in one of the following: AWS, GCP, Azure
  • Experience with ML frameworks (PyTorch, TensorFlow, etc.) and libraries (scikit-learn)
  • Experience with Terraform is a plus
  • Perform adequate unit tests for developed applications and validate expected results
  • Troubleshoot reported defects to determine the root cause and implement fixes
  • Peer-review code created by team members

We recognize that people come with a wealth of experience and talent beyond just the technical requirements of a job. If your experience is close to what you see listed here, please still consider applying. Diversity of experience and skills combined with passion is a key to innovation and inspiration. Therefore, we encourage people from all backgrounds to apply to our positions. Please let us know if you require accommodations during the interview process.

Equal Opportunity Employer: Minorities/Women/Protected Veterans/Disabled

EEO is The Law

Brunswick and Workday Privacy Policies

Brunswick does not accept applications, inquiries, or solicitations from unapproved staffing agencies or vendors. For help, please contact our support team at 866-278-6942.

All job offers will come to you via the candidate portal you create when applying through a posted position. If you are ever unsure about what is being required of you during the application process or its source, please contact HR Shared Services at 866-278-6942.
