DevOps Engineer

Novi Sad / Remote

We are looking for a DevOps/Infrastructure Engineer to join our team!

This is what we do:

End-to-end data solutions: building data pipelines, manipulating data, ingesting and integrating multiple data sources, processing data in an optimal way, storing it, presenting and reporting on it, and providing the right data, in a performant fashion, for our data science/BI/ML teams.

Because end-to-end data solutions matter to us, we automate and monitor infrastructure, orchestrate data flows, and version and deploy ML models, so we need skillful infrastructure engineers on our team.

Some of the technologies we like to use:

  • Cloud infrastructure management (mostly AWS, but we also work on Azure and GCP)
  • On-prem infrastructure deployment & operations
  • Terraform, Ansible, and cloud-provided automation tools (such as CloudFormation on AWS) to automate infrastructure creation and configuration
  • Monitoring tools so we know what is happening (Grafana, cloud-provided monitoring, Prometheus, ELK)
  • Orchestration (multi-step processes such as ETL or data pipelines need to be automated, and technologies such as NiFi and Airflow help a lot here)
  • Automation (deployment, operations) of data storage and processing technologies such as Cassandra, Druid, Kafka, Spark, Flink, Presto.


This is what we expect from you:

  • You are an infrastructure engineer. To us that means you understand why it is important to test and monitor software, automate deployment, learn the hardware requirements, and understand engineering basics such as networking, operating systems, cloud vs. on-premise, etc.
  • You have 3+ years of experience in DevOps
  • You have experience in some scripting language (bash, Python)
  • You have proven experience with one or more monitoring stacks
  • You have proven experience with one or more automation tools
  • You know what MLOps is (or you want to learn), and you want to apply your DevOps skills to Machine Learning projects
  • You already know, or would like to learn, how to operate and manage distributed technologies such as Kafka, Cassandra, Spark, Flink, the-latest-and-greatest-you-name-it tech, etc.
  • You have worked on a Scrum project, so you know how to prepare for dailies, demos, retrospectives, and refinements, and you know how to manage your work transparently through an issue-tracking tool
  • You like learning new things and are open to exploring different approaches to solutions
  • You like a healthy balanced relationship between work and free time – we don’t like overtime
  • You know when and how to ask for help – we’re here to work together

We offer:
  • Clearly defined pay grades: from L1 (talented junior) to L5 (a senior who is an expert in at least one technology we use)
  • Career path that connects these grades – you know where your life is going (at least here with us)
  • Loyalty coefficient: 10% on net compensation after 3 years in SmartCat, 20% on net compensation after 5 years
  • Knowledge budget: extra money for conferences, books, and training of your choice
  • Flexible working hours and work from home
  • End-of-the-year bonus program
  • Full transparency – information about levels and salaries, company strategy, financial reports, and beyond
  • Full support towards gaining knowledge and expertise
  • An excellent team of senior engineers and data scientists