Namaste!

I am Ruturaj Ramchandra Shitole, a Software Engineer currently pursuing a Master's degree at George Washington University. My interests lie primarily in MLOps, where I strive to create user-friendly ML products that serve the needs of businesses and consumers alike. I believe adaptability is crucial in the fast-paced tech industry, so I stay open to new ideas and ways of thinking. I like to approach problems from unusual angles to arrive at creative solutions, and I am adept at learning abstract concepts and applying them across domains, which helps me stay current with developments in the tech world.


Work Experience

Organization: Syncron

Position: Software Engineer

Duration: Jan 2022 - Dec 2023

Role:

  • Member of the Datalab and Syncron.AI team
  • MLOps
  • Acting Data Scientist in the team
  • Understanding the mathematics behind complex statistical and ML solutions, and building applications and end-to-end pipeline products around them
  • Creating and maintaining infrastructure and applications to support the ML lifecycle

Tech Stack: AWS, Terraform, Kubernetes, Docker, Kubeflow, Python

My job as a Software Engineer has evolved to encompass the infrastructure aspect of the MLOps Domain. Through this, I have become proficient in utilizing Kubernetes and other CNCF-approved resources to create tools and infrastructure that support the Data Science and ML Lifecycle. Working with Kubernetes and its associated tools has sparked my interest in exploring their potential application in serving ML models as products.
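
As a small illustration of that kind of infrastructure work, here is a minimal sketch of how a two-step ML workflow might be declared with the Kubeflow Pipelines SDK v2 (kfp) from the tech stack above. The component names, images, and logic are hypothetical placeholders, not our actual pipelines.

```python
# A sketch of a two-step ML pipeline, assuming the Kubeflow Pipelines
# SDK v2 (kfp); component names, images, and logic are hypothetical.
from kfp import compiler, dsl


@dsl.component(base_image="python:3.11")
def preprocess(raw_path: str) -> str:
    # Placeholder: clean the raw data and return the cleaned-data location.
    return raw_path + ".cleaned"


@dsl.component(base_image="python:3.11")
def train(data_path: str) -> str:
    # Placeholder: fit a model on the cleaned data and return its location.
    return data_path + ".model"


@dsl.pipeline(name="example-ml-pipeline")
def ml_pipeline(raw_path: str = "s3://example-bucket/raw"):
    cleaned = preprocess(raw_path=raw_path)
    train(data_path=cleaned.output)


if __name__ == "__main__":
    # Compile to a pipeline spec that a Kubeflow cluster can execute.
    compiler.Compiler().compile(ml_pipeline, "pipeline.yaml")
```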

The research paper "Hidden Technical Debt in Machine Learning Systems" by Google researchers has been a source of inspiration for me. It highlights the challenges of running ML systems in practical applications: its well-known diagram portrays the "ML Code" component as a small box surrounded by everything else, underscoring how much work beyond coding is needed to realize an ML system's full potential. This principle resonates with my team's approach to providing value to our organization. My responsibilities also include designing and developing microservices that support the ML workflow and exploring various ML tools through experimentation.

Another fascinating aspect of my work involves constructing advanced analytics models for new price-related applications. This particular facet of my job demands a deep understanding of complex mathematical and ML concepts, which I then translate into programming and visualization. The models are implemented as end-to-end pipelines with analytical dashboards that visualize the results. One notable contribution of mine was the development of a solution called Top-Down Price Optimization. This solution combined enterprise pricing with multivariate calculus and business rules to provide a tool for strategically adjusting prices. The objective was to assist businesses in optimizing prices to achieve a desired increase in revenue and maximize profits while adhering to relevant business constraints.
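
To make the idea concrete, below is a toy sketch of the kind of constrained price optimization involved, assuming SciPy's SLSQP solver and an illustrative constant-elasticity demand model. The numbers, the demand model, and the constraints are hypothetical stand-ins, not the actual Syncron solution.

```python
# A toy sketch of constrained price optimization, assuming SciPy's SLSQP
# solver; the demand model and all numbers are illustrative.
import numpy as np
from scipy.optimize import minimize

base_price = np.array([100.0, 250.0, 40.0])   # current prices
base_qty = np.array([500.0, 120.0, 2000.0])   # current sales volumes
cost = np.array([60.0, 150.0, 25.0])          # unit costs
elasticity = np.array([-1.2, -0.8, -1.5])     # price elasticity of demand

def quantity(price):
    # Constant-elasticity demand: volume reacts to relative price changes.
    return base_qty * (price / base_price) ** elasticity

def neg_profit(price):
    # Negate profit because scipy minimizes.
    return -np.sum((price - cost) * quantity(price))

target_revenue = 1.05 * np.sum(base_price * base_qty)  # +5% revenue goal

constraints = [{"type": "ineq",
                "fun": lambda p: np.sum(p * quantity(p)) - target_revenue}]
# Business rule: each price may move at most +/-10% from its current value.
bounds = [(0.9 * b, 1.1 * b) for b in base_price]

result = minimize(neg_profit, x0=base_price, bounds=bounds,
                  constraints=constraints, method="SLSQP")
print("optimized prices:", np.round(result.x, 2))
```

The pattern is the important part: profit is the objective, the revenue target enters as an inequality constraint, and the business rules become bounds on each price.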

Organization: Syncron

Position: Associate Software Developer

Duration: Jan 2021 - Dec 2021

Role:

  • Part of the Datalab and Analytics BI team
  • Designing and maintaining end-to-end statistical and ML Pipelines
  • Developing and maintaining Analytical and BI products and tools around them

Tech Stack: AWS, Docker, Kubeflow, Python

After completing a year-long internship, I was promoted to Associate Software Developer, and my responsibilities expanded to include the MLOps domain. In addition to developing and maintaining BI tools and services, I worked on price-related use cases. This involved understanding business requirements, analyzing customer data, and developing end-to-end pipelines that handled the entire process, from reading raw data to generating analytical dashboards for customers.

As a member of the Datalab team at Syncron, I gained valuable knowledge about applying Object-Oriented Programming principles beyond code itself: concepts such as abstraction and polymorphism guided how we designed and planned the architecture. I also became familiar with functional programming, which let me express logic as functions that were more readable and easier to follow.
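
Here is a minimal sketch of what that looked like in practice, with hypothetical step classes rather than our real ones: each pipeline step hides its details behind a common abstraction, and the runner relies on polymorphism to treat all steps uniformly.

```python
# A minimal sketch of abstraction and polymorphism in pipeline design;
# the step classes here are hypothetical examples.
from abc import ABC, abstractmethod

class PipelineStep(ABC):
    """Abstraction: every step exposes the same run() contract."""

    @abstractmethod
    def run(self, data: dict) -> dict:
        ...

class CleanData(PipelineStep):
    def run(self, data: dict) -> dict:
        # Drop records flagged as invalid upstream.
        return {k: v for k, v in data.items() if v is not None}

class ComputeMetrics(PipelineStep):
    def run(self, data: dict) -> dict:
        data["count"] = len(data)
        return data

def execute(steps: list[PipelineStep], data: dict) -> dict:
    # Polymorphism: the runner treats every step uniformly.
    for step in steps:
        data = step.run(data)
    return data

print(execute([CleanData(), ComputeMetrics()], {"a": 1, "b": None}))
```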

Working on planning and designing solutions also exposed me to the concept of Domain-Driven Design. I learned how to design a solution based on the problem's domain, and then code it using the principles of OOP and functional programming.

Challenges Faced

  • Lack of understanding of business finances: The statistical and ML pipelines I built used transactional data from the business. While building the solution, I ran into many records that I assumed were outliers, so I discarded them before they reached the successive steps in the pipelines. I only realised the mistake later, when the pipelines started generating incorrect analyses, and I had to discuss the data with many people on the business side to fix those issues.
  • Lack of understanding of important technologies: Being new to the Datalab team, I had no knowledge of Kubernetes, networking, or infrastructure. I had to learn them from scratch, and I got a lot of help from my colleagues.

Lessons Learnt

  • Perspective Thinking is important: I could not have achieved anything if I had been too shy to ask for help.
  • Product First Development: There is a hard lesson for every developer in the tech industry: no matter how technically advanced your product is, it does not matter if the customer does not get value out of it in the end. Many times my highly optimized code had to be discarded because it did not correctly deliver on the business requirements.
  • Learn, Unlearn, and Relearn: I have worked with leaders in my current organization who are open to new trends in technology, even when those developments contradict their own learning and past experience. They analyze new technologies objectively and fairly before deciding whether to adopt them, and they are not afraid of rebuilding their knowledge from scratch; everything they encounter is a learning experience. Being in the company of such people has inspired me to develop the same attitude towards new developments.

Organization: Syncron

Position: Intern

Duration: Jan 2020 - Dec 2020

Role:

  • Part of the Analytics BI team
  • Developing and maintaining Analytical and BI products and tools around them
  • Developing and maintaining microservices

Tech Stack: AWS, Serverless, Docker, Python

My internship provided my first professional exposure to the world of IT, where I gained valuable knowledge about software development and deployment as a service. I honed my skills in writing readable and maintainable code while working extensively with Python, Serverless, and Docker. The software I developed primarily facilitated business intelligence work for customers, giving me a close view of how data is visualized to meet business needs and draw insights.

Moreover, I had the opportunity to be involved in the design and development of microservices for internal use by various teams, which allowed me to expand my technical proficiency further.

Challenges Faced

  • Lack of knowledge of professional software development: I did not know how deliverable code is written for professional use. I overcame this by talking to my colleagues, finding out which resources they had used to learn these things, and discussing them together.
  • Lack of experience in cloud development: Before my internship I had no experience working with cloud providers. Being a B2B SaaS (Software as a Service) company, we delivered our services via the cloud only, so learning the cloud was a must, and there are hundreds of cloud services to choose from. To get started, my Team Lead directed me to learn the basic cloud resources; once I was comfortable with those, I was able to learn on my own. Needless to say, I made a lot of mistakes in the process.
  • Lack of support from existing libraries: In one use case we had to generate a custom-formatted Excel file. The formatting required freezing certain columns and applying numeric formats such as custom thousands separators, for which existing libraries provided very little support. To solve it, I had to read through the documentation and run many experiments and tests to finally arrive at a solution; a sketch of the kind of approach involved follows this list.
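
For illustration, here is a small sketch of that kind of formatting, assuming the openpyxl library; the sheet contents and the number-format code are illustrative, and the exact rendering of custom separators depends on Excel's locale handling.

```python
# A sketch of custom Excel formatting, assuming the openpyxl library;
# the sheet contents and the number-format code are illustrative.
from openpyxl import Workbook

wb = Workbook()
ws = wb.active
ws.append(["Product", "Revenue"])
ws.append(["Widget", 1234567.891])

# Freeze everything above and to the left of cell B2,
# i.e. the header row and the first column.
ws.freeze_panes = "B2"

# Excel number-format codes control separators; an escaped space like this
# is one workaround for a space thousands separator, though exact rendering
# depends on Excel's locale handling.
ws["B2"].number_format = "#\\ ##0.00"

wb.save("report.xlsx")
```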

Lessons Learnt

  • Don't be ashamed to ask for help: I could not have achieved anything if I had been too shy to ask for help.
  • Code structure and documentation carry a lot of value: I learnt this the hard way when I had to work through undocumented legacy code.


Education

Degree: Master of Science

Major: Computer Science

University: School of Engineering and Applied Sciences, George Washington University

Duration: January 2024 - December 2025

Degree: Bachelor of Engineering

Major: Computer Science

University: Bangalore University

Duration: August 2016 - September 2020


Projects and Competitions

OpenTelemetry and Kubernetes are two of the most popular CNCF projects. Kubernetes is an accepted solution for a variety of orchestration use cases and has been tried and tested for deploying clusters of microservices at different scales. In a distributed environment with many microservices, a Kubernetes cluster can get quite complex as the services interact with each other. Without a uniform observability layer in such a setup, the MTTD (Mean Time To Detect) and the MTTR (Mean Time To Resolve) can go up significantly. A standard observability layer helps by collecting telemetry data (metrics, traces, logs) and displaying it in a dashboarding tool such as Grafana, making it possible to trace the flow of a request and pinpoint the problem. OpenTelemetry aims to provide a standard protocol and tools for observability: it defines the OpenTelemetry specification for instrumenting telemetry data. The benefit of a standard specification is that the instrumented data can then be integrated with tools like Jaeger, Zipkin, and Prometheus, or with vendor-specific tools, down the line.
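
As a taste of what instrumentation looks like, here is a minimal sketch using OpenTelemetry's Python SDK, with the console exporter standing in for a real backend such as Jaeger; the service and span names are made up.

```python
# A minimal sketch of instrumenting a service with OpenTelemetry's Python
# SDK; the console exporter stands in for a real backend such as Jaeger.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Register a tracer provider that ships finished spans to stdout.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("example-service")

def handle_request(order_id: str) -> None:
    # Each span records timing and attributes for one unit of work;
    # nested spans capture the call flow within and across services.
    with tracer.start_as_current_span("handle_request") as span:
        span.set_attribute("order.id", order_id)
        with tracer.start_as_current_span("query_database"):
            pass  # placeholder for the real database call

handle_request("42")
```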

This project was done as part of my dissertation. Our goal was to create a framework that could recognize handwritten characters while requiring minimal training to incorporate new classes. We observed that even though supervised learning models perform well at tasks like classification, they require a lot of training data. That is fine for classification tasks where many samples per class can be gathered easily; however, when trained for applications with sparse data, these networks do not perform well. For such tasks, we proposed a framework that uses semi-supervised learning techniques.

We divide the dataset into three splits: base, novel, and validation. The base split contains classes with many samples available for training; we use 24 of the 47 character classes from the Kannada abugida (alphabet). The novel split has fewer samples and is used to test the incorporation of unseen classes during finetuning; it contains 12 character classes. The validation split, containing the remaining 11 character classes, is used to validate the entire framework.

The first step is pretraining, which trains the network to learn good feature representations on the base classes. This is followed by episodic finetuning on the novel split, which tunes the network to incorporate unseen classes using 1-shot and 5-shot learning. After finetuning, we evaluate the full framework on the validation split.
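
The episodic part can be illustrated with a short sketch of N-way K-shot episode sampling; this is a generic sampler over random stand-in data, not the actual dissertation code.

```python
# A sketch of N-way K-shot episode sampling used during episodic finetuning;
# the dataset here is random stand-in data, not the Kannada character set.
import numpy as np

rng = np.random.default_rng(0)
# Stand-in "novel" split: 12 classes, 20 samples each, 64-dim features.
features = rng.normal(size=(12, 20, 64))

def sample_episode(n_way=5, k_shot=1, n_query=5):
    """Build one episode: a support set (k_shot labelled examples per class)
    and a query set the model must classify using only that support set."""
    classes = rng.choice(features.shape[0], size=n_way, replace=False)
    support, query = [], []
    for label, cls in enumerate(classes):
        idx = rng.choice(features.shape[1], size=k_shot + n_query, replace=False)
        support.append((features[cls, idx[:k_shot]], label))
        query.append((features[cls, idx[k_shot:]], label))
    return support, query

support, query = sample_episode(n_way=5, k_shot=1)
print(len(support), "support classes,", len(query), "query classes")
```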

We were able to achieve an accuracy of 99.13% while using significantly less training data.

Summit is part of the National Annual Technical Fest.

Summit is a student parliament where participants present their ideas and views on a specific topic to the judges and audience. It is conducted in two rounds: in the first, each team presents a solution to a problem statement provided before the event; in the second, participants present a solution to a problem statement given on the spot by the judges.

Secured Second Place

Presented ComPhy: Computer Enabled Physiotherapy System.

Proposed architecture for physiotherapy using Computer Vision and Range-of-Motion analysis.

Secured 61st Rank in the ACM ICPC Asia Regional - IIT Kharagpur

Theme: Cross A Crater

Programming Languages: Python, embedded-C

Tools and Libraries: OpenCV, NumPy

Hardware: Firebird-V (ATMEGA-2560), XBee, Servo Motors

In 2016, I participated in the e-Yantra Robotics Competition alongside my friends, where we were given the theme "Cross A Crater." Our task was to program a robot that could cross a crater on a far-off planet by analyzing the two paths around it.

The competition was split into two stages, each with multiple tasks. The first stage involved learning about computer vision and algorithms, which was daunting since we lacked experience in both areas. Additionally, we had to code the solutions in Python, which was unfamiliar to us at the time. Nevertheless, we overcame these challenges as a team and completed all the tasks, including the last one, which was the most challenging. It required us to use computer vision to detect a maze, solve it for the shortest path from one end to another, and document the process in code.
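
For flavor, here is a compact sketch of grid-maze shortest-path solving with breadth-first search, similar in spirit to what the task required; the maze itself is made up.

```python
# A sketch of grid-maze shortest-path solving with breadth-first search;
# the maze layout here is an illustrative stand-in.
from collections import deque

maze = [  # 0 = open cell, 1 = wall
    [0, 1, 0, 0],
    [0, 1, 0, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
]

def shortest_path(start, goal):
    """BFS visits cells in order of distance from the start, so the first
    time the goal is dequeued, the reconstructed path is a shortest one."""
    rows, cols = len(maze), len(maze[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and maze[nxt[0]][nxt[1]] == 0 and nxt not in parent):
                parent[nxt] = cell
                queue.append(nxt)
    return None  # no path exists on this map

print(shortest_path((0, 0), (3, 3)))
```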

In addition to coding solutions, we had to ensure the readability of the code and create relevant documentation, which further tested our skills. We were able to complete all the tasks in the first stage and advance to the second stage.

The second stage demanded that we construct an arm for a robot provided by the competition's hosts and program the robot to operate it. We also had to detect the arena using a webcam and find the shortest path around the crater while avoiding its cavities. The arena had two candidate paths whose cavities had to be filled with boulders; the robot had to pick up boulders one by one and fill the cavities to reach the other end.

The challenges in the second stage included scanning the arena with a webcam, determining the shortest path across the crater, and generating robot commands based on that path. While we were able to scan the arena and create the robot commands, we could not complete the task on time, as the robot kept diverging from the path. However, the arm we developed was so robust that the robot could lift itself using the arm and perform one-armed push-ups.

Overall, the competition was an engaging and challenging experience: I gained valuable technical skills, learned to work as a team, and felt the satisfaction of working hard towards a goal.




Hobbies and Interests

I have a strong passion for reading non-fiction books. I am drawn to a diverse range of subjects, including life, finance, physics, biology, and design, as I believe that knowledge can be gained from any area and applied universally. In addition, I have a particular interest in mystery and thriller fiction books. On occasion, I also enjoy reading self-help books, which allow me to enhance and refine my lifestyle.

Doodling is a favorite pastime of mine as it not only helps me express my creativity but also provides a sense of freedom and relaxation. The simple act of drawing and sketching helps me unwind and de-stress, while also allowing me to explore my artistic side without any pressure or expectations.

Playing casual games is another favorite way of mine to unwind; it lets me relax and de-stress while exploring my imaginative side.

What I find captivating about Formula 1 is the sheer speed and precision of the cars, which require the drivers to possess exceptional skill and competitive drive. Additionally, the intense rivalries between teams and their drivers add another layer of excitement to the sport. Formula 1 is also highly technical, with even minor advancements in technology and engineering providing a significant competitive advantage, making it an ever-evolving and thrilling spectacle.