[{"content":" Objective As a member of this project, I contributed to the development of a solution addressing the critical need for efficient railway track maintenance and inspection.  Tech Stack   Robot Operating System (ROS) Gazebo MATLAB NGINX TensorFlow Sensors: MPU 6050, GPS, Laser Scanner (LIDAR)    Key Features  Regular maintenance and inspection are vital for ensuring the safety and effectiveness of railway transportation. In India, where manual inspections are the norm, our project introduces an autonomous 4-wheeled robot designed for comprehensive railway track defect detection.  Capable of performing ultrasonic Nondestructive Testing (NDT) for internal crack detection and 3D-laser profiling for surface cracks, gauge length, and ballast profile inspection. Equipped with a machine vision system (camera) for anomaly detection in fasteners and sleepers. Autonomous inspection of tracks for defects in Ballast, Rails, Sleepers, and Fasteners, utilizing various testing methods. Integrated GPS, odometer, and IMU sensors for precise robot positioning and status monitoring. Inspection data, including ongoing capture and defect details, transmitted to a user-friendly web server. Web server accessible by inspection personnel for real-time monitoring and analysis, including live camera feed, laser track profiling, and other inspection data. Automated alerts to base stations for detected defects, providing spatio-temporal coordinates and defect details.  
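The alert flow above can be sketched in miniature. This is an illustrative Python fragment; the field names, coordinates, and schema are hypothetical stand-ins, not the robot's actual implementation:

```python
import json
import time

# Illustrative only: the field names and values below are hypothetical,
# not the project's actual alert schema.
def build_defect_alert(defect_type, lat, lon, odometer_m, details):
    return {
        'defect_type': defect_type,   # e.g. 'surface_crack', 'loose_fastener'
        'latitude': lat,              # from the GPS module
        'longitude': lon,
        'odometer_m': odometer_m,     # distance along the track from the odometer
        'timestamp': time.time(),     # UNIX time of detection
        'details': details,
    }

# The web server (behind NGINX) would receive this as JSON, e.g. via an HTTP POST.
alert = build_defect_alert('surface_crack', 12.9716, 77.5946, 1523.4, '3 mm crack, left rail')
payload = json.dumps(alert)
```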
The use of ROS, Gazebo, MATLAB, NGINX, and TensorFlow in our tech stack allowed us to develop a robust and efficient digital twin for autonomous railway track inspection, reducing human effort and enhancing accuracy.\n ","permalink":"https://kailashjagadeesh.netlify.app/projects/aribot/","summary":"Objective As a member of this project, I contributed to the development of a solution addressing the critical need for efficient railway track maintenance and inspection.  Tech Stack   Robot Operating System (ROS) Gazebo MATLAB NGINX TensorFlow Sensors: MPU 6050, GPS, Laser Scanner (LIDAR)    Key Features  Regular maintenance and inspection are vital for ensuring the safety and effectiveness of railway transportation. In India, where manual inspections are the norm, our project introduces an autonomous 4-wheeled robot designed for comprehensive railway track defect detection.","title":"Aribot"},{"content":" Project Overview As the primary member spearheading this project, my focus is on continuous monitoring and prediction of faults in machine elements, specifically shafts and bearings in large machinery like gas turbines. The project leverages real-time vibration-based sensor data and employs intelligent analytics, particularly exploring Machine Learning techniques for fault detection and diagnosis.  Objectives and Contributions The primary objectives of this research include:  Continuous monitoring of machine elements to predict and prevent catastrophic failures. Exploration of suitable Machine Learning algorithms for fault detection in vibration-based signals. Investigation of dimensionality reduction techniques for identifying single faults in a cracked rotor. Exploration of clustering algorithms for detecting multiple faults in bearings.  
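The dimensionality reduction mentioned above can be illustrated with a minimal, generic sketch; the time-domain features, the PCA-via-SVD choice, and the synthetic data below are illustrative stand-ins, not the thesis's actual pipeline, techniques, or datasets:

```python
import numpy as np

def extract_features(windows):
    # Simple per-window time-domain features: RMS, peak, and kurtosis.
    rms = np.sqrt(np.mean(windows ** 2, axis=1))
    peak = np.max(np.abs(windows), axis=1)
    centred = windows - windows.mean(axis=1, keepdims=True)
    kurt = np.mean(centred ** 4, axis=1) / (np.mean(centred ** 2, axis=1) ** 2)
    return np.stack([rms, peak, kurt], axis=1)

def pca_reduce(X, n_components=2):
    # Project feature vectors onto their top principal components via SVD.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Synthetic vibration windows: a baseline set and a set with an added periodic component.
rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, (50, 1024))
faulty = rng.normal(0.0, 1.0, (50, 1024)) + 0.5 * np.sin(np.linspace(0, 200 * np.pi, 1024))
X = extract_features(np.vstack([healthy, faulty]))
Z = pca_reduce(X)
```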
The key contributions of this work are in advancing the understanding and application of Machine Learning techniques in fault detection, particularly showcasing close agreements with experimental observations.\n Significance and Problem Statement Machine elements in large machinery are prone to faults due to continuous operation and harsh conditions, necessitating continuous monitoring and early detection to prevent catastrophic failures. The project addresses this critical need by leveraging real-time vibration-based sensor data and applying statistical algorithms to decipher fault-related features.  Problem Statement  The need for continuous monitoring and early fault detection in machine elements, such as shafts and bearings, to avoid catastrophic failures and ensure uninterrupted operation.  Methodology and Technology Stack  The project explores four dimensionality reduction techniques for single fault identification in a cracked rotor and investigates two clustering algorithms for multiple faults in bearings. The datasets used are collected from experimental facilities, providing real-world scenarios for algorithm testing. The technology stack includes:  Machine Learning Algorithms: Employed for fault detection and diagnosis in vibration-based signals. Dimensionality Reduction Techniques: Explored for identifying single faults in a cracked rotor. Clustering Algorithms: Investigated for detecting multiple faults in bearings. Experimental Datasets: Collected from suitable facilities, ensuring realistic testing scenarios.   Conclusion The close mutual agreement between the algorithms and experimental observations in predicting the onset of faults in both rotor and bearing datasets showcases the effectiveness of the applied Machine Learning techniques. This research contributes to the advancement of fault detection methodologies, particularly in the context of large machinery with critical machine elements.   
","permalink":"https://kailashjagadeesh.netlify.app/projects/undergrad_thesis/","summary":"Project Overview As the primary member spearheading this project, my focus is on continuous monitoring and prediction of faults in machine elements, specifically shafts and bearings in large machinery like gas turbines. The project leverages real-time vibration-based sensor data and employs intelligent analytics, particularly exploring Machine Learning techniques for fault detection and diagnosis.  Objectives and Contributions The primary objectives of this research include:  Continuous monitoring of machine elements to predict and prevent catastrophic failures.","title":"Predictive Analytics for Structural Health Monitoring (Undergraduate Thesis)"},{"content":" Objective As the mentor for this project, I provided guidance in developing a system for creating editable 3D models of real-world objects using a minimal set of images, with a focus on leveraging Structure from Motion and Neural Radiance Fields.  Technology Stack  NeRF TensorFlow PyTorch Deep Learning    Key Features  The project aims to revolutionize 3D reconstruction in virtual space, finding applications in scene rendering and self-driving cars. Key features include:  User-Input Image Processing: Users provide images of the object, initiating the process. Structure from Motion (SfM): Determines the camera's relative position from the object, calculating relative poses. Neural Radiance Fields (NeRF): Utilizes deep learning concepts to output RGB color and opacity alpha of each voxel in 3D space. Rendering Equation: Integrates NeRF output to render the item's volume in the 3D space. Editable 3D Models: Creates editable 3D models from limited input images.  
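Per camera ray, the rendering-equation step above amounts to alpha compositing of NeRF's predicted densities and colors. A minimal numpy sketch of that standard compositing, with synthetic values rather than the project's code:

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    # Standard NeRF compositing: C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
    # where T_i is the transmittance (light surviving to sample i).
    alphas = 1.0 - np.exp(-sigmas * deltas)                         # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))  # T_i
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)

# A ray that hits a nearly opaque red sample early should render (close to) red.
sigmas = np.array([0.0, 50.0, 50.0])                       # densities along the ray
colors = np.array([[0.0, 0.0, 1.0],                        # blue (empty space, zero density)
                   [1.0, 0.0, 0.0],                        # red (opaque sample)
                   [0.0, 1.0, 0.0]])                       # green (occluded sample)
deltas = np.array([0.1, 0.1, 0.1])                         # sample spacings
rgb = composite_ray(sigmas, colors, deltas)
```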
This project, powered by NeRF, TensorFlow, PyTorch, and Deep Learning, stands at the forefront of 3D reconstruction technology, offering a versatile solution for generating detailed and editable 3D models from minimal visual input.\n ","permalink":"https://kailashjagadeesh.netlify.app/projects/sparo/","summary":"Objective As the mentor for this project, I provided guidance in developing a system for creating editable 3D models of real-world objects using a minimal set of images, with a focus on leveraging Structure from Motion and Neural Radiance Fields.  Technology Stack  NeRF TensorFlow PyTorch Deep Learning    Key Features  The project aims to revolutionize 3D reconstruction in virtual space, finding applications in scene rendering and self-driving cars.","title":"SPARO"},{"content":" Objective  As the primary mentor for this project, our goal was to cater to the needs of visually impaired individuals by creating a wearable, cost-effective device that provides audio instructions based on the user's surroundings.  Tech Stack   Deep Learning Models: CLIP and LXMERT Computation Platforms: Raspberry Pi, Jetson Nano Sensors: Time-of-flight-based LIDAR, MPU6050 Safety Features: Fall detection, GPS tracking, Obstacle detection Power Source: Li-Po Battery    Key Features   Wearable prototype offering audio instructions for enhanced user understanding. Utilizes CLIP and LXMERT deep learning models for image captioning and visual question answering. Wireless camera on the headband captures images, processed on Raspberry Pi or Jetson Nano. Safety features include fall detection via MPU6050 and obstacle detection through LIDAR. Vibration feedback from haptic motors helps users navigate obstacles effectively. GPS tracking for location-based services and emergency contacts notified in case of a fall.  
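A fall detector of the kind described above can be sketched as a simple threshold rule on accelerometer magnitude: a near-free-fall dip followed shortly by an impact spike. The thresholds and synthetic samples here are illustrative, not the device's tuned values:

```python
import math

def detect_fall(accel_samples, free_fall_g=0.35, impact_g=2.5, window=20):
    # accel_samples: list of (ax, ay, az) in g-units from an IMU such as the MPU6050.
    # Thresholds are illustrative; a deployed device would tune them empirically.
    mags = [math.sqrt(ax * ax + ay * ay + az * az) for ax, ay, az in accel_samples]
    for i, m in enumerate(mags):
        if m < free_fall_g:                                   # near-free-fall dip
            if any(mm > impact_g for mm in mags[i + 1:i + 1 + window]):
                return True                                   # impact spike soon after
    return False

still = [(0.0, 0.0, 1.0)] * 50                                # resting: ~1 g from gravity
fall = still + [(0.0, 0.0, 0.1)] * 5 + [(0.0, 3.2, 1.5)] + still  # dip, then impact
```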
This device not only addresses the accessibility needs of the visually impaired but does so with an emphasis on safety and affordability. The combination of cutting-edge technology and thoughtful design makes it a promising solution in assistive technology.\n ","permalink":"https://kailashjagadeesh.netlify.app/projects/anvi/","summary":"Objective  As the primary mentor for this project, our goal was to cater to the needs of visually impaired individuals by creating a wearable, cost-effective device that provides audio instructions based on the user's surroundings.  Tech Stack   Deep Learning Models: CLIP and LXMERT Computation Platforms: Raspberry Pi, Jetson Nano Sensors: Time-of-flight-based LIDAR, MPU6050 Safety Features: Fall detection, GPS tracking, Obstacle detection Power Source: Li-Po Battery    Key Features   Wearable prototype offering audio instructions for enhanced user understanding.","title":"ANVI"},{"content":"Objective As a dedicated member of this project, I played a crucial role in developing Pepper, a versatile personal assistant platform utilizing various artificial intelligence algorithms.\n Technology Stack  SLAM Microsoft Kinect ROS Gazebo Deep Learning    Robot Design The robot's chassis, designed using AutoCAD and fabricated with acrylic, features three layers housing components such as the battery, driver, gripper arm, and onboard computer. Microsoft Kinect is mounted on the top pedestal, while high-torque motors and a castor wheel ensure stable movement.  Mapping Pepper optimally navigates its environment by creating a map using the onboard Kinect sensor and the gmapping ROS package, employing the Simultaneous Localisation and Mapping (SLAM) algorithm.  
Localisation To address wheel drift, we implemented the Kalman filter algorithm, combining data from multiple sensors, including encoders and a gyroscope, for accurate pose estimation.  Speech Recognition Utilizing a text-independent machine learning algorithm, Pepper identifies speakers through voice samples, employing Mel Frequency Cepstral Coefficient (MFCC) vectors for training and real-time recognition.  Object Recognition and Pickup Trained with a Convolutional Neural Network (CNN) on 1000 object classes, Pepper captures and measures object depth using Kinect. The robotic arm, equipped with 3 degrees of freedom, approaches objects based on depth, and the gripper, with 1 DoF, picks up recognized objects. This project seamlessly integrates SLAM, Microsoft Kinect, ROS, Gazebo, and Deep Learning to empower Pepper as an advanced personal assistant, capable of efficient navigation, speech recognition, and intelligent object interaction.\n ","permalink":"https://kailashjagadeesh.netlify.app/projects/pepper/","summary":"Objective As a dedicated member of this project, I played a crucial role in developing Pepper, a versatile personal assistant platform utilizing various artificial intelligence algorithms.\n Technology Stack  SLAM Microsoft Kinect ROS Gazebo Deep Learning    Robot Design The robot's chassis, designed using AutoCAD and fabricated with acrylic, features three layers housing components such as the battery, driver, gripper arm, and onboard computer. Microsoft Kinect is mounted on the top pedestal, while high-torque motors and a castor wheel ensure stable movement.","title":"Pepper"},{"content":"Project Overview As the primary member leading this project, my focus is on developing an open-source quadcopter platform for advancing research in drone autonomy. 
The project encompasses the implementation of various deep learning and computer vision algorithms, including person tracking, gesture control using human pose estimation, optical flow stabilization, obstacle avoidance, and depth estimation using monocular vision.  Objectives and Contributions  The primary objectives of this project include:\n Building an open-source quadcopter platform for research in drone autonomy. Implementing deep learning and computer vision algorithms for person tracking, gesture control, optical flow stabilization, obstacle avoidance, and depth estimation. Utilizing a Pixhawk flight controller with Raspberry Pi as a companion computer for efficient control. Employing DJI Flame Wheel-450 for the quadcopter frame, customized with additional mountings for extra components.    Technology Stack The technology stack for this project includes:\n Pixhawk Flight Controller Raspberry Pi ROS (Robot Operating System) Gazebo Simulation Docker Containers   Project Implementation The Raspberry Pi runs a ROS node, establishing communication with another ROS node on the host PC to transfer videos over Wi-Fi. To ensure the project's open-source nature and ease of development, the simulation environment setup is dockerized using Docker containers. The ongoing development involves implementing and testing algorithms within the Gazebo Simulation.\n Conclusion This project contributes to the field of drone autonomy research by providing an open-source platform with advanced features such as person tracking, gesture control, optical flow stabilization, obstacle avoidance, and depth estimation. 
The utilization of industry-standard components like Pixhawk and Raspberry Pi enhances the project's accessibility and reproducibility.\n ","permalink":"https://kailashjagadeesh.netlify.app/projects/openquad/","summary":"Project Overview As the primary member leading this project, my focus is on developing an open-source quadcopter platform for advancing research in drone autonomy. The project encompasses the implementation of various deep learning and computer vision algorithms, including person tracking, gesture control using human pose estimation, optical flow stabilization, obstacle avoidance, and depth estimation using monocular vision.  Objectives and Contributions  The primary objectives of this project include:\n Building an open-source quadcopter platform for research in drone autonomy.","title":"OpenQuad"},{"content":" Objective  As the mentor for this project, I guided the team in developing a solution focused on addressing the challenges of obstacle mapping in robotic networks.  Technology Stack   ESP8266 Compressive Sensing Yagi Antenna Robotic Cooperative Network    Key Features  Obstacle mapping is essential for the robust operation of robotic networks, yet existing approaches struggle with mapping occluded objects. Our project, LEWI (Localization and mapping of Enclosed space using Wi-Fi signals), leverages the unique properties of Wi-Fi signals to map objects that are traditionally challenging to detect.\n Utilizes compressive sensing, a novel algorithm capable of reconstructing signals from incomplete observations, to efficiently map occluded obstacles with minimal data. Explores the ability of Wi-Fi signals to pass through and decay within objects, enabling the mapping of obstacles not directly visible to the robot. 
Achieves accurate localization and shape determination of occluded objects within enclosed spaces. Designed for scenarios where a group of Unmanned Aerial Vehicles (UAVs) or robots needs to cooperatively build an aerial map within a limited timeframe. Addresses the practical constraints of delay-sensitive applications by minimizing the required measurements while still providing comprehensive obstacle mapping.  Our project, driven by localization using Wi-Fi signals, pushes the boundaries of obstacle mapping in robotics, offering a solution that is both efficient and effective.\n ","permalink":"https://kailashjagadeesh.netlify.app/projects/lewi/","summary":"Objective  As the mentor for this project, I guided the team in developing a solution focused on addressing the challenges of obstacle mapping in robotic networks.  Technology Stack   ESP8266 Compressive Sensing Yagi Antenna Robotic Cooperative Network    Key Features  Obstacle mapping is essential for the robust operation of robotic networks, yet existing approaches struggle with mapping occluded objects. Our project, LEWI (Localization and mapping of Enclosed space using Wi-Fi signals), leverages the unique properties of Wi-Fi signals to map objects that are traditionally challenging to detect.","title":"LEWI"},{"content":"Objective As the mentor for this critical project, I provided guidance in developing a solution aimed at addressing the challenges of tracking and monitoring wildlife without the limitations posed by existing technologies.  Technology Stack  MATLAB FMCW Technology mm Waves Fourier Transformations    Key Features The project focuses on using Frequency Modulated Continuous Wave (FMCW) radar to detect and estimate vital signs, such as respiration and heartbeat frequencies, in wildlife without the need for physical devices. 
Key features include:  Non-Invasive Monitoring: Utilizes FMCW radar for non-contact measurement of vital signs, eliminating the need for devices worn by animals. Wide Field Coverage: Offers a broader field for tracking both animal and human movements, crucial for preventing poaching and hunting in wildlife sanctuaries. Real-Time Tracking: Provides real-time tracking of vital signs through radar technology, enhancing monitoring capabilities. Weather-Resilient: Works in all weather conditions, eliminating constraints faced by some existing models. Hardware Utilization: Texas Instruments IWR6843AOPEVM hardware is employed, leveraging its capabilities in mm waves and Fourier transformations. Simulations and Applications: MATLAB is used for simulations of heartbeat and respiration rates, and the project explores applications in motion tracking, contour detection, and area scanning.  The project marks a significant advancement in wildlife monitoring, offering a more effective and non-intrusive solution for preserving and protecting animal species in their natural habitats.\n ","permalink":"https://kailashjagadeesh.netlify.app/projects/star/","summary":"Objective As the mentor for this critical project, I provided guidance in developing a solution aimed at addressing the challenges of tracking and monitoring wildlife without the limitations posed by existing technologies.  
Technology Stack  MATLAB FMCW Technology mm Waves Fourier Transformations    Key Features The project focuses on using Frequency Modulated Continuous Wave (FMCW) radar to detect and estimate vital signs, such as respiration and heartbeat frequencies, in wildlife without the need for physical devices.","title":"STAR"},{"content":" Objective As a dedicated member of this transformative project, I contributed to the development of Supernumerary Robotic Fingers, a wearable robot designed to enhance the capabilities of the human hand for performing a range of prehensile, bimanual, and manipulation tasks.  Technology Stack  Machine Learning Control Systems Rehabilitative Robotics Dynamics    Key Features The wearable robot serves as an active compensatory tool, particularly beneficial in the early stages of therapeutic recovery and rehabilitation. Key features include:  Enhanced Grasping Abilities: Designed to augment hand functions, aiding patients in recovering grasping abilities. Intuitive Control: Flex sensors on the patient's fingers, integrated into a hand glove, and an Inertial Measurement Unit (IMU) enable intuitive control of the two robotic fingers. Therapeutic Recovery: Facilitates arm use even in the absence of fully recovered hand grasp function. Chronic Hemiparetic Support: Offers assistance for chronic hemiparetic patients to lead independent and productive lives. Rehabilitation Device: Intended as a tool to assist in bimanual tasks, such as grasping and manipulating objects.  
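The flex-sensor control described above can be sketched as a calibrated linear map from sensor reading to finger joint angle; the ADC endpoints and angle range below are hypothetical per-user calibration values, not the actual glove's:

```python
def flex_to_angle(adc_value, adc_min=180, adc_max=850, angle_range=120.0):
    # Map a flex-sensor ADC reading to a robotic-finger joint angle in degrees.
    # adc_min/adc_max are hypothetical calibration endpoints for one user's
    # straight and fully bent finger; a linear map is the simplest choice.
    adc_value = max(adc_min, min(adc_max, adc_value))   # clamp to the calibrated range
    return (adc_value - adc_min) / (adc_max - adc_min) * angle_range
```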
This project, employing Machine Learning, Control Systems, Rehabilitative Robotics, and Dynamics, represents a significant stride in assistive technology, providing innovative solutions for individuals in therapeutic recovery seeking enhanced hand functionality.\n ","permalink":"https://kailashjagadeesh.netlify.app/projects/srf/","summary":"Objective As a dedicated member of this transformative project, I contributed to the development of Supernumerary Robotic Fingers, a wearable robot designed to enhance the capabilities of the human hand for performing a range of prehensile, bimanual, and manipulation tasks.  Technology Stack  Machine Learning Control Systems Rehabilitative Robotics Dynamics    Key Features The wearable robot serves as an active compensatory tool, particularly beneficial in the early stages of therapeutic recovery and rehabilitation.","title":"Supernumerary Robotic Fingers (SRF)"},{"content":" Objective As the mentor for this impactful project, I provided guidance in developing a solution that addresses communication challenges faced by individuals with speech disabilities. The project focuses on creating a sign language-to-speech converter device, offering a more efficient and versatile means of communication.  Technology Stack  Sony Spresense nRF24L01+ Network Sensor Fusion Deep Learning    Key Features The sign language-to-speech converter device is designed to enhance communication for individuals with speech disabilities. Key features include:  Sign Language Decoding: Utilizes finger movements and angular hand positions for decoding signs into specific words. Custom Voice Set: Feeds decoded words to a speaker with a custom voice set, improving speech efficiency and accuracy. 
Personalized Calibration: The system is tuned and calibrated to an individual's hand and finger movements, enhancing accuracy and efficiency. Input Sensors: Flex sensors and Inertial Measurement Units (IMU) capture and process input signals from sign language gestures. Gesture Tracking: IMU data is used to predict hand angles, enabling accurate tracking of gestures. Mobile and Unrestrictive: Battery-powered design ensures mobility and unrestricted use in various communication scenarios.  This project, incorporating Sony Spresense, nRF24L01+ Network, Sensor Fusion, and Deep Learning, represents a significant advancement in assistive technology, offering a streamlined and effective communication tool for individuals with speech disabilities.\n ","permalink":"https://kailashjagadeesh.netlify.app/projects/ssc/","summary":"Objective As the mentor for this impactful project, I provided guidance in developing a solution that addresses communication challenges faced by individuals with speech disabilities. The project focuses on creating a sign language-to-speech converter device, offering a more efficient and versatile means of communication.  Technology Stack  Sony Spresense nRF24L01+ Network Sensor Fusion Deep Learning    Key Features The sign language-to-speech converter device is designed to enhance communication for individuals with speech disabilities.","title":"SSC"},{"content":" Objective As the primary member driving this project, our focus is on optimizing component test benches in the endurance testing of automotive components, specifically exploring the testing of the automotive throttle position sensor (TPS) using fiducial markers and image processing techniques. 
This involves benchmarking existing End Of Line (EOL) setups and test benches, designing an efficient machine vision-based testing system, and implementing it as a cost-effective alternative. The key contributions of this research include:\n Scalability and Cost Effectiveness: The proposed methodology is highly scalable and cost-effective while ensuring the required testing accuracy. Advancement in Computer Vision: Contribution to advancing computer vision techniques, particularly in optimizing testing methodologies for bulk manufacturing industries like automotive manufacturing.    Significance and Problem Statement In the automotive industry, component testing is crucial for ensuring vehicle safety, efficiency, and effectiveness. Traditional methods, such as human observation, are limited in accuracy due to factors like fatigue and subjectivity. Machine vision emerges as a valuable tool, providing an automated and reliable process for capturing and analyzing data, thereby revolutionizing testing methodologies within production processes.\n Methodology and Technology Stack The machine vision-based testing system utilizes fiducial markers, a monocular camera, a laptop for video feed capture and processing, motor actuators (without encoders), and a power supply. The methodology involves benchmarking existing setups, designing the vision-based system, and implementing it to accurately and efficiently test TPS components. The technology stack includes:  Machine Vision: Utilized for capturing and analyzing data. Fiducial Markers: Aid in tracking and calibration. Monocular Camera: Captures video feed for analysis. Motor Actuators: Without encoders, contributing to cost-effectiveness. Power Supply: Essential for system functionality.   
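One simple way the fiducial markers above can yield a throttle angle is to read a marker's in-plane rotation from its detected corner points (as returned by, e.g., OpenCV's ArUco detector). The geometry is sketched here with synthetic corner positions, not the project's actual code:

```python
import math

def marker_angle_deg(corners):
    # corners: four (x, y) pixel points in order (top-left, top-right,
    # bottom-right, bottom-left), as an ArUco-style detector would return them.
    # The angle of the top edge stands in for the sensor shaft's rotation.
    (x0, y0), (x1, y1) = corners[0], corners[1]
    return math.degrees(math.atan2(y1 - y0, x1 - x0))

# A marker rotated 45 degrees in the image plane (synthetic corner positions):
corners = [(0.0, 0.0), (10.0, 10.0), (0.0, 20.0), (-10.0, 10.0)]
angle = marker_angle_deg(corners)
```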
Conclusion The successful development of this machine vision-based testing system promises to revolutionize the testing of TPS components, ensuring reliable and high-quality components are integrated into vehicles during the production process.\n ","permalink":"https://kailashjagadeesh.netlify.app/projects/auto_testbench/","summary":"Objective As the primary member driving this project, our focus is on optimizing component test benches in the endurance testing of automotive components, specifically exploring the testing of the automotive throttle position sensor (TPS) using fiducial markers and image processing techniques. This involves benchmarking existing End Of Line (EOL) setups and test benches, designing an efficient machine vision-based testing system, and implementing it as a cost-effective alternative. The key contributions of this research include:","title":"Development and Optimisation of Automotive Testing Using Machine Vision"},{"content":"Summary  Project M1: Roadster Bike  May 2023 - Present\n Led benchmarking efforts and engaged in supplier interactions for purchased systems, focusing on suspension and powertrain components. Designed and prototyped the M1 roadster model's mule vehicle, emphasizing powertrain control and throttle integration. Collaborated on the in-house ABS system development with the brakes and system integration team.   Project S1 Air: Electric Scooter  December 2022 - August 2023\n Conceptualized a cost-effective resistive transducer for the throttle, resulting in a cost reduction of 105 INR. Redesigned the electronic steering column lock (ESCL) and electronic seat latch actuator, achieving cost reductions of 300 INR and 256 INR. Established automated test setups for component-level testing, contributing to DVP and DFMEA documentation for supplier production. Diagnosed and resolved a horn honking issue through FFT analysis, implementing immediate corrective actions on the manufacturing line.    
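An FFT diagnosis like the horn investigation above reduces to locating the dominant frequency in a recording; this generic numpy sketch uses a synthetic tone, not the actual production data:

```python
import numpy as np

def dominant_frequency(signal, sample_rate):
    # Return the strongest non-DC frequency component of a recorded signal.
    spectrum = np.abs(np.fft.rfft(signal))
    spectrum[0] = 0.0                         # ignore the DC offset
    return np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)[np.argmax(spectrum)]

# Synthetic stand-in for a horn recording: a 440 Hz tone buried in noise.
rng = np.random.default_rng(1)
fs = 8000
t = np.arange(fs) / fs
recording = np.sin(2 * np.pi * 440 * t) + 0.3 * rng.normal(size=fs)
peak_hz = dominant_frequency(recording, fs)
```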
Project C1: Electric Car  August 2022 - January 2023\n Developed the initial LV electrical and network architecture, conducting a detailed study on ADAS network and bandwidth requirements. Led system benchmarking, prepared RFIs and RFQs, ensuring alignment with industry standards like ISO 26262 (Functional Safety) and ISO 22737 (ADAS). Created a comprehensive sensor feature mapping report, receiving direct review and appraisal from the CEO. Played a key role in ECU packaging within the design mockup, maintaining regular supplier interactions for technical discussions.   Project S1 Pro: Electric Scooter  July 2022 - July 2023\n Designed and developed a tire pressure monitoring system, including hardware sensor selection and compatibility with vehicle software. Identified and rectified a critical bug within the vehicle CAN bus system, successfully fixing it through over-the-air updates for on-road vehicles. Developed a replacement side stand sensor based on a hall effect sensor, resolving on-road issues and conducting a 10,000 km endurance test. Conducted diagnosis and root cause analysis for on-road issues and customer complaints using CAN and telematics data.    ","permalink":"https://kailashjagadeesh.netlify.app/experience/ola_electric/","summary":"Summary  Project M1: Roadster Bike  May 2023 - Present\n Led benchmarking efforts and engaged in supplier interactions for purchased systems, focusing on suspension and powertrain components. Designed and prototyped the M1 roadster model's mule vehicle, emphasizing powertrain control and throttle integration. Collaborated on the in-house ABS system development with the brakes and system integration team.   
Project S1 Air: Electric Scooter  December 2022 - August 2023","title":"Assistant Manager - Electrical \u0026 Electronics Systems Engineer (Full Time)"},{"content":"Summary  Worked on the implementation of predictive analytics for super-critical machine elements during the INSPIRE Fellowship Program at Tata Steel Ltd. Studied the operation of the belt drive systems at Tata Steel and identified the bearings in the pulleys of the belt drive as a potential point of failure. Contributed to the development of the monitoring system and lubrication control system for the identified bearings. Analyzed vibration data from accelerometer readings of the bearings, examining various component frequencies corresponding to the rollers, casing, and cage of the bearing. Developed a predictive model using LSTMs and transformer architecture to analyze accelerometer readings, predicting possible failure modes and estimating the remaining useful life of the bearings, enabling proactive maintenance alerts ahead of failure. Achieved an 83% accuracy in the time series model. Successfully incorporated the system to work in real time, allowing for continuous training and live inference.    ","permalink":"https://kailashjagadeesh.netlify.app/experience/tata_steel_internship/","summary":"Summary  Worked on the implementation of predictive analytics for super-critical machine elements during the INSPIRE Fellowship Program at Tata Steel Ltd. Studied the operation of the belt drive systems at Tata Steel and identified the bearings in the pulleys of the belt drive as a potential point of failure. Contributed to the development of the monitoring system and lubrication control system for the identified bearings. 
Analyzed vibration data from accelerometer readings of the bearings, examining various component frequencies corresponding to the rollers, casing, and cage of the bearing.","title":"INSPIRE Summer Internship"},{"content":"Summary  Collaborated with Dr. Srinivasa Prasanna on an electromechanical problem statement involving the conceptualization and development of a prototype magnetic flywheel.  The magnetic flywheel concept differs from conventional flywheels by storing excess energy as magnetic flux, performing torque smoothing at the expense of magnetic energy rather than rotational kinetic energy, allowing for smaller and lighter conventional flywheels in engine manufacturing.  Contributed to the project by designing the model of a benchmarked engine and the proposed electromechanical flywheel using Fusion 360 software.  Actively participated in electromagnetic simulation and multiphysics dynamics simulation of the engine and flywheel using Ansys Maxwell and Ansys Workbench.  Conducted combined simulations using MATLAB and Simulink, achieving torque smoothing for speeds up to 800 rpm, validating the successful design. ","permalink":"https://kailashjagadeesh.netlify.app/experience/iiit_bglore_internship/","summary":"Summary  Collaborated with Dr. Srinivasa Prasanna on an electromechanical problem statement involving the conceptualization and development of a prototype magnetic flywheel.  The magnetic flywheel concept differs from conventional flywheels by storing excess energy as magnetic flux, performing torque smoothing at the expense of magnetic energy rather than rotational kinetic energy, allowing for smaller and lighter conventional flywheels in engine manufacturing.  
Contributed to the project by designing the model of a benchmarked engine and the proposed electromechanical flywheel using Fusion 360 software.","title":"Research Intern"},{"content":"Summary  Contributed to the development of a navigation stack for a mobile robot at Robotronics Systems Pvt Ltd, with future plans to scale it up for self-driving cars. Designed URDF files and conducted simultaneous localization and mapping in a virtual map. Implemented point-to-point navigation and other functions in simulation using ROS (Robot Operating System) and Gazebo, utilizing the GMapping package to build occupancy-grid maps for navigation. Actively participated in the implementation of ROS on embedded computers such as the Raspberry Pi and Jetson Nano. ","permalink":"https://kailashjagadeesh.netlify.app/experience/robotronix_internship/","summary":"Summary  Contributed to the development of a navigation stack for a mobile robot at Robotronics Systems Pvt Ltd, with future plans to scale it up for self-driving cars. Designed URDF files and conducted simultaneous localization and mapping in a virtual map. Implemented point-to-point navigation and other functions in simulation using ROS (Robot Operating System) and Gazebo, utilizing the GMapping package to build occupancy-grid maps for navigation. Actively participated in the implementation of ROS on embedded computers such as the Raspberry Pi and Jetson Nano.","title":"Research Intern"},{"content":"Summary  Collaborated with Mahesh Parihar, CEO of the MmM startup, and a staff member affiliated with IIT Bombay on the development of machine learning techniques for predicting coronary artery disease using patient medical data. Benchmarked various machine learning techniques and evaluated their performance for coronary artery disease prediction during the internship. 
Contributed to feature selection and augmentation techniques, including principal component analysis. The final model achieved 90% accuracy on the test data, demonstrating the effectiveness of the developed machine learning techniques. ","permalink":"https://kailashjagadeesh.netlify.app/experience/iit_bombay_internship/","summary":"Summary  Collaborated with Mahesh Parihar, CEO of the MmM startup, and a staff member affiliated with IIT Bombay on the development of machine learning techniques for predicting coronary artery disease using patient medical data. Benchmarked various machine learning techniques and evaluated their performance for coronary artery disease prediction during the internship. Contributed to feature selection and augmentation techniques, including principal component analysis. The final model achieved 90% accuracy on the test data, demonstrating the effectiveness of the developed machine learning techniques.","title":"Summer Research Intern"},{"content":"Coming Soon ","permalink":"https://kailashjagadeesh.netlify.app/clubs/rmi/","summary":"Coming Soon ","title":"Core Research Member \u0026 Treasurer of the Club"},{"content":"Coming Soon ","permalink":"https://kailashjagadeesh.netlify.app/clubs/mea/","summary":"Coming Soon ","title":"Technical Secretary"},{"content":"Description Coming Soon ","permalink":"https://kailashjagadeesh.netlify.app/clubs/synergy/","summary":"Description Coming Soon ","title":"Technical Head"},{"content":"Coming Soon ","permalink":"https://kailashjagadeesh.netlify.app/clubs/cpc/","summary":"Coming Soon ","title":"CPC Team"},{"content":"Coming Soon ","permalink":"https://kailashjagadeesh.netlify.app/clubs/nittfest/","summary":"Coming Soon ","title":"Nittfest Coordinator"},{"content":"Coming Soon 
","permalink":"https://kailashjagadeesh.netlify.app/clubs/orientation/","summary":"Coming Soon ","title":"Orientation Team"},{"content":"Coming Soon ","permalink":"https://kailashjagadeesh.netlify.app/clubs/aaveg/","summary":"Coming Soon ","title":"Aaveg Team"},{"content":"Coming Soon ","permalink":"https://kailashjagadeesh.netlify.app/clubs/sportsfete/","summary":"Coming Soon ","title":"Sportsfete Team"},{"content":"Who am I? An avid young professional with a knack for tech, a love for anime, and a serious addiction to Marvel and DC flicks. Robotics is my playground, and I'm all about diving into interdisciplinary research. As your friendly neighborhood engineer, I'm always on the lookout for new learning opportunities and collaborative ventures. Dive into my journey below for a quick glimpse of my experiences. Click on the images to unlock the full stories and detailed adventures that shaped my professional path! Technical Proficiency:  Work Experience: Check out the Experience section for more details on my work.\nEducation: My Projects:  Click on the image to view the projects in detail.\nFor more information, check out my resume.\nResume  Publications and Awards: K. Jagadeesh* and N. Hemangani, \"SR (Supernumerary Robotic) Fingers,\" 7th International Conference on Computing in Engineering \u0026 Technology (ICCET 2022), Online Conference, 2022, pp. 217-221, doi: 10.1049/icp.2022.0621. K. Jagadeesh*, \"Development and Optimization of Automotive Testing using Machine Vision,\" Intelligent Computing Systems and Applications: Proceedings of the 2nd International Conference (ICICSA 2023), Hybrid Conference, 2023, Publication Pending. Recipient of the RECT 1976 scholarship, a merit-cum-need-based scholarship sponsored by the alumni cell of NIT Trichy for the academic year 2020-2021. Exhibited projects representing NIT Trichy at EXCON-2022, an annual international expo for the construction and heavy automotive industry, at BIEC, Bangalore, 2022. 
Secured first place in technical paper presentation events organised by the CSE, EEE, Mechanical, IT and Bio-medical departments at Invente 5.0, a national-level technical symposium organised by SSN College of Engineering in 2021.  Secured first place in Colloquia, a technical project presentation event by the mechanical department at IIT BHU, 2021.  Secured second place in Paper Presentation, a technical event held as part of Elan \u0026 nVision 2021, the annual technical fest of IIT Hyderabad. Secured second place in Project Presentation, a technical event organised as part of Apogee, an annual technical symposium by BITS Pilani, 2021. Selected as one out of 120 participants for Tata Steel’s INSPIRE Program, a nationally coveted internship program, 2021.  ","permalink":"https://kailashjagadeesh.netlify.app/about_me/","summary":"About Kailash","title":""},{"content":"Educational Timeline  Work Experience Timeline  body { font-family: Arial, sans-serif; background-color: #f4f4f4; margin: 0; padding: 0; } .timeline { position: relative; max-width: 400px; margin: 50px auto; } .timeline::before { content: ''; position: absolute; top: 0; left: 50%; width: 2px; height: 100%; background-color: #333; transform: translateX(-50%); } .timeline-item { position: relative; margin-bottom: 50px; } .timeline-item::after { content: ''; position: absolute; top: 50%; left: 50%; width: 20px; height: 20px; background-color: #333; border-radius: 50%; transform: translate(-50%, -50%); } .timeline-date { font-weight: bold; margin-bottom: 10px; } .timeline-content { margin-left: 30px; } h2 { margin-bottom: 5px; } p { margin: 0; }  \n  2020 - Present Current Job Title Job description and responsibilities go here.\n  2018 - 2020 Previous Job Title Job description and responsibilities go here.\n     ","permalink":"https://kailashjagadeesh.netlify.app/dummy/","summary":"Educational Timeline  Work Experience Timeline  body { font-family: Arial, sans-serif; background-color: #f4f4f4; margin: 0; 
padding: 0; } .timeline { position: relative; max-width: 400px; margin: 50px auto; } .timeline::before { content: ''; position: absolute; top: 0; left: 50%; width: 2px; height: 100%; background-color: #333; transform: translateX(-50%); } .timeline-item { position: relative; margin-bottom: 50px; } .timeline-item::after { content: ''; position: absolute; top: 50%; left: 50%; width: 20px; height: 20px; background-color: #333; border-radius: 50%; transform: translate(-50%, -50%); } .","title":""},{"content":"","permalink":"https://kailashjagadeesh.netlify.app/blog/","summary":"Coming Soon","title":"Coming Soon!!!"}]