
Multimodal Large Model Joint Learning Algorithm: Reproduction Based on KubeEdge-Ianvs #123

Open
CreativityH opened this issue Jul 20, 2024 · 51 comments · Fixed by #166 or #167 · May be fixed by #168
Labels
kind/feature Categorizes issue or PR as related to a new feature.

Comments

@CreativityH

What would you like to be added/modified:
A benchmark suite for multimodal large language models deployed at the edge using KubeEdge-Ianvs:

  1. Modify and adapt the existing edge-cloud data collection interface to meet the requirements of multimodal data collection;
  2. Implement a Multimodal Large Language Model (MLLM) benchmark suite based on Ianvs (a metric sketch follows this list);
  3. Reproduce mainstream multimodal joint learning (training and inference) algorithms and integrate them into Ianvs single-task learning;
  4. (Advanced) Test the effectiveness of multimodal joint learning in at least one of Ianvs' advanced paradigms (lifelong learning, incremental learning, federated learning, etc.).
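
To make item 2 concrete: in existing Ianvs examples, a benchmark metric is simply a Python function registered for the test environment and referenced by alias from the testenv configuration. Below is a minimal sketch in that style; the alias "mllm_accuracy" and the exact-match scoring rule are illustrative assumptions, not part of this issue.

    # Hypothetical metric module in the style of existing Ianvs examples.
    from sedna.common.class_factory import ClassFactory, ClassType

    @ClassFactory.register(ClassType.GENERAL, alias="mllm_accuracy")
    def mllm_accuracy(y_true, y_pred):
        """Fraction of multimodal benchmark samples answered exactly correctly."""
        correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
        return correct / len(y_true) if y_true else 0.0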

Why is this needed:
KubeEdge-Ianvs currently focuses on edge-cloud collaborative learning (training and inference) for a single modality of data. However, edge devices, such as those in autonomous vehicles, often capture multimodal data, including GPS, LIDAR, and Camera data. Single-modal learning can no longer meet the precise inference requirements of edge devices. Therefore, this project aims to integrate mainstream multimodal large model joint learning algorithms into KubeEdge-Ianvs edge-cloud collaborative learning, providing multimodal learning capabilities.

Recommended Skills:
TensorFlow/PyTorch, LLMs, KubeEdge-Ianvs

Useful links:
KubeEdge-Ianvs
KubeEdge-Ianvs Benchmark Test Cases
Building Edge-Cloud Synergy Simulation Environment with KubeEdge-Ianvs
Artificial Intelligence - Pretrained Models Part 2: Evaluation Metrics and Methods
Example LLMs Benchmark List
awesome-multimodal-ml
Awesome-Multimodal-Large-Language-Models

@varshith257

@CreativityH Is there a recommended community channel to connect and discuss the project further?

@SargamPuram

Hi @CreativityH ,

I'm excited about the opportunity to contribute to the "Multimodal Large Model Joint Learning Algorithm" project. My background in edge computing and machine learning, particularly with TensorFlow/PyTorch, aligns well with the project's goals. Here’s my proposed approach:

Proposed Approach

  1. Multimodal Data Collection Interface:

I will modify and adapt the existing edge-cloud data collection interface to handle multimodal data, including GPS, LiDAR, and camera inputs. This will involve creating a unified data schema and preprocessing modules for each data type to ensure compatibility and consistency (see the schema sketch after this list).

  2. Multimodal Large Language Model (MLLM) Benchmark Suite:

I will develop a benchmark suite for multimodal LLMs based on Ianvs. This will involve identifying suitable multimodal LLMs and defining relevant performance metrics, such as accuracy, latency, and resource utilization, to evaluate their effectiveness when deployed at the edge.

  3. Multimodal Joint Learning Algorithms:

I will reproduce mainstream multimodal joint learning algorithms (training and inference) and integrate them into Ianvs’ single-task learning framework. This step will ensure the system can effectively handle the complexities of multimodal data.

  4. Advanced Testing and Optimization:

I will test the effectiveness of multimodal joint learning in at least one of Ianvs' advanced paradigms (lifelong learning, incremental learning, federated learning). I will benchmark the system to ensure performance improvements without compromising accuracy.
I will explore possible optimizations to enhance the efficiency of the edge-cloud collaborative learning setup, focusing on resource usage and latency reduction.
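
To illustrate item 1, here is a minimal sketch of what a unified multimodal sample schema could look like. The field names, shapes, and the preprocessing hook are assumptions for illustration only, not an existing Ianvs interface.

    from dataclasses import dataclass, field
    from typing import Optional

    import numpy as np

    @dataclass
    class MultimodalSample:
        """One synchronized edge sample; all fields are illustrative."""
        timestamp: float                           # shared clock for synchronization
        gps: Optional[tuple] = None                # (lat, lon, alt)
        lidar: Optional[np.ndarray] = None         # point cloud, shape (N, 4)
        camera: Optional[np.ndarray] = None        # image, shape (H, W, 3)
        meta: dict = field(default_factory=dict)   # per-sensor extras

    def preprocess(sample: MultimodalSample) -> MultimodalSample:
        """Per-modality preprocessing hook; extend with one branch per sensor."""
        if sample.camera is not None:
            sample.camera = sample.camera.astype(np.float32) / 255.0  # normalize
        return sample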

Looking forward to your feedback and the way forward to contributing!

Best Regards,
Sargam

@AryanNanda17

Hello @CreativityH @MooreZheng
I am Aryan Nanda, currently pursuing my Bachelor's in Computer Science at VJTI, Mumbai, India. I want to contribute to KubeEdge-Ianvs through this project; it aligns with my interests and experience.

Why me?

  1. GSoC - I am a contributor in Google Summer of Code on a project that detects commercials and replaces them with alternate content using deep learning. Project Link.
    Mentor feedback after mid-term evaluation:
    Screenshot 2024-08-04 at 4 49 25 PM
    The project will be completed by the end of this month.
  2. I did a research internship in the CoE CNDS lab, VJTI, where I developed LLM models for a network IDS for attack detection. (repo is private)
  3. My team and I participated in the Form and Function Challenge by Mass Robotics, where I implemented the YOLO model for real-time object detection and then implemented a SLAM algorithm from LiDAR sensor and camera readings. Code Link

@AryanNanda17

AryanNanda17 commented Aug 4, 2024

Please share any suggestions you have on how to start this project.
I will be sharing my findings and research here.

@CreativityH (Author)

> (quoting @SargamPuram's proposal above)

Hi @SargamPuram, what a great proposal!
Furthermore, I'm curious how you would alter the data collection interface so that new data formats can be added without changing the original data collection logic. Flowcharts and other ways of showing your thinking are welcome.
Please feel free to contact me if you have any questions. Looking forward to your amazing ideas!

@CreativityH (Author)

> Please share any suggestions you have on how to start this project. I will be sharing my findings and research here.

Hello @AryanNanda17,

From your introduction I see that you are an active code contributor and community member, and that you have earned a lot of certifications, which is awesome.

I learned that you have experience collecting radar and camera data in Evo-Borne. So I think you could start by familiarizing yourself with the Ianvs platform, finding its data collection interface, and then thinking about how to modify that interface for multimodality.

Looking forward to your amazing ideas!

@staru09

staru09 commented Aug 8, 2024

Hi, are there any pre-tests for this project?

@aryan0931 (Contributor)

aryan0931 commented Aug 8, 2024

Hello @CreativityH, I'm Aryan Yadav, and I'm excited to contribute to the Multimodal Large Model Joint Learning Algorithm project with KubeEdge-Ianvs. I have extensive experience in ML, PyTorch, LLMs, and multimodal AI, and have worked on several LLM-related projects that won awards. Looking forward to collaborating on this!

Here is a potential solution:
Upgrade the data collection interface: the way the system collects information from edge devices needs to change to accommodate several streams at once, such as GPS, LiDAR, or camera images. This involves a flexible data collection setup that can seamlessly process varied data types.

Creation of a Multimodal Benchmark Suite:
Next, I will set up a number of tests to see how well the system manages and integrates various types of data. This includes testing how the system handles and makes sense of combined data types.

Integrate Joint Learning Algorithms:
I will integrate algorithms that train on mixed data types. This is necessary for making accurate predictions where the data inputs may be complex. I will ensure that these algorithms work well with the system's current learning methods.

Advanced Testing :
I will further test the system in more advanced learning scenarios, such as continual or federated learning: the system adapts over time and learns from data across different devices without sharing the raw data.

By designing a modular and pluggable data collection system, one can integrate new data formats without modifying the existing content flow. This approach allows for flexibility and scalability in handling diverse types of data. I have tried to explain it using a simple flowchart :) (a code sketch of the pluggable idea follows below)

Screenshot 2024-08-09 at 12 04 15 AM
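
A minimal sketch of the "modular and pluggable" idea above: a new modality registers a reader without touching the existing collection flow. All names here are illustrative, not existing Ianvs code.

    # Pluggable reader registry: adding a data format is one decorator,
    # with no changes to the existing collection flow. Illustrative only.
    READERS = {}

    def register_reader(modality):
        def wrap(fn):
            READERS[modality] = fn
            return fn
        return wrap

    @register_reader("gps")
    def read_gps(source):
        return {"lat": 0.0, "lon": 0.0}   # stub: parse the GPS feed here

    @register_reader("camera")
    def read_camera(source):
        return b""                        # stub: grab a camera frame here

    def collect(sources):
        """Collect one record from every registered modality."""
        return {m: READERS[m](src) for m, src in sources.items() if m in READERS}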

@CreativityH (Author)

> Hi, are there any pre-tests for this project?

Hello, @staru09

I think three pre-tests are required to verify the risks of the idea:

  1. successfully run Ianvs on your device;
  2. figure out what kinds of multimodal data you want to collect and use;
  3. test your selected multimodal data and corresponding algorithm.

After these three steps, I think you will have a good handle on the issue.

@CreativityH (Author)

> (quoting @aryan0931's proposed solution above)

Hello @aryan0931, nice job!

You have made a great flowchart which clearly shows your idea and design. What I wonder now is whether you might incorporate Ianvs into your flowchart. I suggest running Ianvs on your device first to get to know it better.

Looking forward to your enhanced design!

@aryan0931 (Contributor)

> (quoting the exchange with @CreativityH above)

Sure, @CreativityH, I will update you on this in some time.

@octonawish-akcodes

@CreativityH I am interested in working on this under LFX this term. Can you point me to the relevant docs and pre-tests?

@Jeeten8205

Hello @MooreZheng @CreativityH

I'm interested in the project focused on developing a benchmark suite for Multimodal Large Language Models using KubeEdge-Ianvs. The integration of multimodal joint learning into edge-cloud collaborative systems is crucial, and I'd love to contribute. I have experience with TensorFlow/PyTorch, LLMs, and KubeEdge-Ianvs, and I'm eager to be part of this effort.

Looking forward to discussing this further!

@MooreZheng added the kind/feature label on Aug 13, 2024

@octonawish-akcodes

While testing the Ianvs quickstart there were lots of dependency issues and YAML file path irregularities. I fixed them somehow, but I'm not sure how to fix this error; can you have a look? @CreativityH
Screenshot from 2024-08-13 19-01-25

@octonawish-akcodes

Is FPN_TensorFlow a custom module? I couldn't find such a module on pip :/

@aryan0931 (Contributor)

aryan0931 commented Aug 13, 2024

Hello @CreativityH @MooreZheng, thanks for your feedback! I applied your recommendation to the flowchart and have included Ianvs; this integration is now clearly visible in the approach below.
I would really like to know what you think of the redesign and whether you have any other ideas to make it even better.

Upgrade Data Collection Interface

  • Change Collection Approach
  • Adapt to Multiple Streams (GPS, Lidar, Camera Images)

Flexible Setup for Varied Data Types

Create Multimodal Benchmark Suite

  • Set Up Tests for Data Integration
  • Test Handling of Combined Data Types

Integrate Joint Learning Algorithms with Ianvs

  • Train on Mixed Data Types
  • Ensure Compatibility with Current Learning Methods using Ianvs

Advanced Testing in Ianvs Environment

  • Test in Advanced Scenarios
  • Continuous Learning with Ianvs

Federated Learning with Ianvs Integration

  • Adapt and Learn from Data Across Devices Without Sharing Raw Data

Design Modular and Pluggable System Compatible with Ianvs

Integrate New Data Formats with Ianvs

  • Maintain Existing Content Flow

Ensure Flexibility and Scalability using Ianvs

Screenshot 2024-08-13 at 10 13 30 PM

This integration leverages Ianvs for key components like joint learning algorithms, advanced testing, federated learning, and continuous learning. It also aligns with our focus on cloud-edge collaboration, ensuring that the system remains scalable, flexible, and ready for future challenges.

@Sid260303

Hey @CreativityH, Siddhant here. I would like to take part in the project under your guidance in the LFX Mentorship.
I am a beginner in open source contribution, but I am interested in LLMs and Kubernetes.
I am very excited about working on this project and with the KubeEdge-Ianvs framework.
Currently I am trying to learn more about Kubernetes, LLMs, and multimodal ML. If there are any resources you could provide to help me prepare for this project, that would be of much help.
Also, are there any prerequisite tasks I must complete other than setting up Ianvs on my local machine?
Thanks again for considering this request.

@AryanNanda17

Hello @CreativityH,
I followed the instructions in the Quick Start guide and faced the same issues as @octonawish-akcodes (a bunch of compatibility issues: the guide says to use Python 3.6.9, but with that version other packages error out saying they need a higher version of Python), and so on. The quick-start guide needs an update.

Also, should I write a proposal for this project and do a PR?

@staru09

staru09 commented Aug 13, 2024

The command sudo apt-get install libgl1-mesa-glx -y from the quick start guide produces an error.
Never mind, it's not the only one; a lot of dependencies such as yaml, pandas, colorbar, etc. are missing.

@aryan0931 (Contributor)

Hello @CreativityH, I think there are some issues with the quick start guide. The command sudo apt-get install libgl1-mesa-glx -y is causing some problems, and there are also issues related to the Python version. Can you provide a quick solution? I am working on the issues and will update you if I find a feasible solution.

@octonawish-akcodes

octonawish-akcodes commented Aug 14, 2024

I raised an RTM PR for the YAML path inconsistencies in the pcb-aoi example in the quickstart: https://ianvs.readthedocs.io/en/latest/guides/quick-start.html#step-3-ianvs-execution-and-presentation

cc @CreativityH

Here is the PR #133

@CreativityH (Author)

> Is FPN_TensorFlow a custom module? I couldn't find such a module on pip :/

I found this dependency here. Maybe try to pip install this wheel.

image

@CreativityH (Author)

> (quoting @aryan0931's Ianvs-integrated flowchart above)

Nicely done! Maybe you can list the specific multimodal learning (training/inference) algorithms you want to use and what improvements each algorithm can achieve. Next, specify the Ianvs function names involved (e.g., the data collection interface) and show where your modified functions will be located.

@octonawish-akcodes

octonawish-akcodes commented Aug 14, 2024

@CreativityH Thanks for the comment, it worked. Also, I have shared my proposal on the CNCF Slack; can you have a look and give feedback?

@CreativityH (Author)

> (quoting @Sid260303's message above)

Maybe the following links are useful:

KubeEdge-Ianvs
KubeEdge-Ianvs Benchmark Test Cases
Building Edge-Cloud Synergy Simulation Environment with KubeEdge-Ianvs
Artificial Intelligence - Pretrained Models Part 2: Evaluation Metrics and Methods
Example LLMs Benchmark List
awesome-multimodal-ml
Awesome-Multimodal-Large-Language-Models

Maybe you can present your idea via a flowchart like @aryan0931 did.

@CreativityH (Author)

> Also, should I write a proposal for this project and do a PR?

@MooreZheng There are some common installation issues encountered by @AryanNanda17 @staru09 .

@CreativityH (Author)

> @CreativityH Thanks for the comment, it worked. Also, I have shared my proposal on the CNCF Slack; can you have a look and give feedback?

Sure, it is my honor.

@AryanNanda17

AryanNanda17 commented Aug 14, 2024

Hello @CreativityH,
Why does it show that the application is closed?
Screenshot 2024-08-14 at 6 04 07 PM
According to this, one more day remains:
Screenshot 2024-08-14 at 6 03 25 PM
I had in mind that the last day is the 15th and that I have to fill out the application before then. So why is it closed?

@octonawish-akcodes

@AryanNanda17 You're looking at the wrong document; it states the 2023 mentorship session. The correct link is https://github.com/cncf/mentoring/blob/main/programs/lfx-mentorship/2024/03-Sep-Nov/README.md, which has the 2024 timelines.

@AryanNanda17

ugh, looks like I missed it. :(

@aryan0931 (Contributor)

> (quoting the flowchart exchange with @CreativityH above)

Sure @CreativityH, I am working on it and will update you in some time.

@staru09

staru09 commented Aug 15, 2024

Was anyone able to complete the setup and run the quick start?

@aryan0931 (Contributor)

Hello @CreativityH @MooreZheng, I have listed some algorithms that may be relevant to this project, along with the improvements they bring:

Integrating Multimodal Learning with Ianvs
1. Multimodal Learning Algorithms
Multimodal Transformer Networks (MTN): improve accuracy by attending to key features across GPS, LiDAR, and camera images.
Deep Canonical Correlation Analysis (DCCA): improves feature extraction and reduces dimensionality by learning correlated representations across modalities.
Multimodal Variational Autoencoder (MVAE): ensures robust joint representations and better generalization.

There are more algorithms that could help this project; I am going through and learning about them (a minimal fusion sketch follows below).
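
For concreteness, here is a minimal, hypothetical sketch of transformer-based late fusion over per-modality features, in the spirit of the MTN idea above. The modality names, feature dimensions, and class count are illustrative assumptions, not project code.

    import torch
    import torch.nn as nn

    class LateFusionTransformer(nn.Module):
        """Fuse per-modality feature vectors with a small transformer encoder."""
        def __init__(self, modal_dims=None, d_model=128, num_classes=10):
            super().__init__()
            modal_dims = modal_dims or {"gps": 8, "lidar": 256, "camera": 512}
            # Project each modality's features into a shared embedding space.
            self.proj = nn.ModuleDict(
                {name: nn.Linear(dim, d_model) for name, dim in modal_dims.items()})
            layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)
            self.head = nn.Linear(d_model, num_classes)

        def forward(self, features):
            # features: dict of modality name -> (batch, dim) tensors.
            tokens = torch.stack(
                [self.proj[name](x) for name, x in features.items()], dim=1)
            fused = self.encoder(tokens)           # (batch, n_modalities, d_model)
            return self.head(fused.mean(dim=1))    # pool modalities, then classify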

2. Key Improvements
Learning Efficiency: Optimized learning from combined data types.
Flexibility: Adapts to new data formats, improving scalability.
Robust Inference: Handles noisy data for reliable results.

3. Ianvs Functions
Data Collection Interface
Data Integration & Testing
Joint Learning Algorithms
Federated Learning Integration
Continuous Learning

I am presently identifying exactly where these functions need to be implemented. This exercise is enriching my understanding of the repository, which is important for integrating these modifications seamlessly. I will share the locations of the modified functions in some time.

I've also specified some of these functions in relation to the overall project objectives in my proposal.

Please do suggest or ask for more information if needed.

@aryan0931 (Contributor)

> Hello @CreativityH, I think there are some issues with the quick start guide. The command sudo apt-get install libgl1-mesa-glx -y is causing some problems, and there are also issues related to the Python version. Can you provide a quick solution? I am working on the issues and will update you if I find a feasible solution.

If others are encountering issues while installing Ianvs, it may be due to outdated dependencies that haven't been updated to match the latest versions. Some APIs from older dependency versions, like Sedna's, have been deprecated, which can lead to confusion and installation errors.

To resolve these issues, it’s recommended to update the outdated packages and upgrade the interface to the latest versions. Aligning the dependencies and interface with the new versions will not only help Ianvs overcome these complex dependency challenges but also enhance overall functionality.

@aryan0931 (Contributor)

Hi @CreativityH , @MooreZheng
Over the past few weeks, I have extensively researched and explored several critical areas related to enhancing KubeEdge-Ianvs with multimodal learning capabilities. My research activities included:

  • KubeEdge-Ianvs Documentation: Reviewed the current documentation and capabilities of KubeEdge-Ianvs, understanding its focus on edge-cloud collaborative learning and its current single-modal data handling.

  • Benchmark Test Cases: Analyzed existing benchmark test cases to identify the metrics and methods needed for evaluating multimodal models effectively. This involved studying various benchmarks to ensure that the new multimodal benchmarks will be comprehensive and relevant.

  • Pretrained Models Evaluation Metrics and Methods: Delved into evaluation metrics and methods for pretrained models, understanding how to assess the performance of large language models and other pretrained systems. This research is crucial for setting up effective benchmarks and evaluation strategies.

  • Awesome Multimodal Machine Learning Repositories: Explored cutting-edge multimodal learning algorithms and integration techniques through resources such as the “Awesome Multimodal Machine Learning” and “Awesome-Multimodal-Large-Language-Models” repositories. This research provided insights into state-of-the-art approaches and how they can be applied to enhance KubeEdge-Ianvs.

Summary:

The research has provided a deep understanding of the current limitations of KubeEdge-Ianvs in handling multimodal data and has highlighted the need for integrating advanced multimodal learning algorithms. The next steps will involve adapting the edge-cloud data collection interface and implementing a comprehensive benchmark suite to evaluate and improve multimodal learning capabilities within Ianvs.

I look forward to applying these insights to advance the project and contribute to its development. If anyone has additional resources or insights related to this area, I would greatly appreciate your input!

@CreativityH (Author)

> (quoting @aryan0931's installation comments above)

@MooreZheng @hsj576 These suggestions sound great. Is there any mismatch between the Ianvs version and the Sedna version?

@CreativityH (Author)

> (quoting @aryan0931's research update above)

@aryan0931 I think in the next step you may pay attention to the location of the data collection function in the original source code. Tell me where that code is and attach figures of it here. Also, for multimodal data collection, you may list the pseudo-code first.

@octonawish-akcodes

@CreativityH I still didn't get any feedback on my proposal :/

@aryan0931 (Contributor)

> (quoting the research update and @CreativityH's reply above)

Sure @CreativityH, I will do it and update you in some time.

@CreativityH (Author)

> @CreativityH I still didn't get any feedback on my proposal :/

Just give me more time 💪🏻

@CreativityH (Author)

> @CreativityH I still didn't get any feedback on my proposal :/

Hey @octonawish-akcodes, I cannot find your proposal. Please tell me where you sent it, or send the proposal again to my email [email protected].

@octonawish-akcodes

@CreativityH Just sent it to you by mail; it's also available on the LFX platform with the mentorship application I submitted.

@aryan0931 (Contributor)

Hello @CreativityH, I have worked on the pseudo-code for the multimodal data collection interface; here it is:

    # Pseudo-code: Multimodal Data Collection Interface for KubeEdge-Ianvs
    # Note: GPSStream, LidarStream, CameraStream, and EdgeNode stand for
    # device/platform interfaces and are assumed, not existing Ianvs classes.
    from statistics import mean
    from time import sleep

    class MultimodalDataCollector:
        def __init__(self):
            # Initialize data streams for different modalities
            self.gps_stream = GPSStream()
            self.lidar_stream = LidarStream()
            self.camera_stream = CameraStream()

            # Dictionary to store synchronized data
            self.synchronized_data = {
                'timestamp': None,
                'gps_data': None,
                'lidar_data': None,
                'camera_data': None,
            }

        def collect_data(self):
            # Collect data from each modality
            gps_data = self.gps_stream.read_data()
            lidar_data = self.lidar_stream.read_data()
            camera_data = self.camera_stream.capture_image()

            # Synchronize data based on timestamps
            timestamp = self._synchronize_data(gps_data, lidar_data, camera_data)

            # Store synchronized data
            self.synchronized_data['timestamp'] = timestamp
            self.synchronized_data['gps_data'] = gps_data
            self.synchronized_data['lidar_data'] = lidar_data
            self.synchronized_data['camera_data'] = camera_data

            # Return synchronized multimodal data
            return self.synchronized_data

        def _synchronize_data(self, gps_data, lidar_data, camera_data):
            # Each data object is assumed to carry a numeric timestamp attribute;
            # pick the timestamp closest to the mean across all modalities.
            timestamps = [gps_data.timestamp, lidar_data.timestamp, camera_data.timestamp]
            return min(timestamps, key=lambda t: abs(t - mean(timestamps)))

        def send_data_to_edge(self, data):
            # Send collected data to the edge node for processing
            EdgeNode.send(data)

        def run(self):
            while True:
                # Collect and send synchronized multimodal data continuously
                synchronized_data = self.collect_data()
                self.send_data_to_edge(synchronized_data)
                sleep(1)  # delay before the next collection cycle

    if __name__ == "__main__":
        collector = MultimodalDataCollector()
        collector.run()

Key Points
Initialization: The MultimodalDataCollector class sets up data streams for GPS, LiDAR, and Camera inputs, representing different modalities.
Data Collection: The collect_data() method retrieves and synchronizes data from each stream based on timestamps.
Synchronization: The _synchronize_data() method aligns data from different modalities by matching timestamps.
Edge Node Transmission: The send_data_to_edge() method transmits synchronized multimodal data to the edge node for processing.
Continuous Collection: The run() method continuously collects and transmits data in real-time.

Next Steps:
Implementation: Translate the pseudo-code into a working implementation within the KubeEdge-Ianvs framework.
Benchmark Development: Create a multimodal benchmark suite with evaluation metrics.
Integration: Apply advanced multimodal learning algorithms to optimize KubeEdge-Ianvs performance.

As for locating the data collection function in the original source code, I am working on it and will update you in some time (a sketch of how an estimator plugs into Ianvs follows below).
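
As a rough illustration of the "Implementation" step above: in Ianvs examples, a model is typically wrapped in an estimator class registered through Sedna's ClassFactory and referenced by alias from the algorithm configuration. The sketch below follows that pattern, but the alias, method bodies, and multimodal batch format are assumptions, not existing project code.

    # Hypothetical estimator sketch; only the registration pattern mirrors
    # existing Ianvs examples, everything else is illustrative.
    from sedna.common.class_factory import ClassFactory, ClassType

    @ClassFactory.register(ClassType.GENERAL, alias="multimodal_estimator")
    class MultimodalEstimator:
        def __init__(self, **kwargs):
            self.model = None  # e.g., the fusion model sketched earlier

        def train(self, train_data, **kwargs):
            # train_data would carry synchronized multimodal samples
            # produced by the collection interface above.
            raise NotImplementedError

        def predict(self, data, **kwargs):
            # Joint inference over the fused modalities.
            raise NotImplementedError

        def save(self, model_path):
            pass  # persist weights for the benchmark runner

        def load(self, model_url):
            pass  # restore weights before inference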

@CreativityH (Author)

> @CreativityH Just sent it to you by mail; it's also available on the LFX platform with the mentorship application I submitted.

@octonawish-akcodes I read your proposal; it is a great job. However, I think you need to elaborate your proposed approach in conjunction with the Ianvs architecture diagram: add or modify your design on top of it, list the names of the Ianvs functions that will be modified, clarify where you expect the added modules to be located, and state whether your changes will affect other existing methods. Anyway, please add the architecture diagram first.

@aryan0931 (Contributor)

aryan0931 commented Aug 27, 2024

Hello @CreativityH, I have spent some time locating the data collection function in the original source code; here is a screenshot of some functions that load data from various paths or URLs.

Screenshot 2024-08-27 at 3 44 08 PM Screenshot 2024-08-27 at 3 42 37 PM

Also, the code at this link does handle data, but it is primarily focused on managing the lifecycle of machine learning models rather than just collecting data: it splits datasets, trains models incrementally, and evaluates them. Overall, the LifelongLearning class provides a framework for models to continuously learn and adapt as new data becomes available, enhancing their performance over time. I am still working on locating the data collection functions (a schematic sketch of the lifecycle follows below).
I am also working on more files, such as the implementation of functions, and will update you once finished.
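
To make the described lifecycle concrete, here is a schematic sketch of the loop attributed to the LifelongLearning class above. The helper structure is a toy stand-in, not the actual Ianvs/Sedna API.

    # Schematic only: toy stand-ins, not Ianvs/Sedna APIs.
    def split_into_rounds(dataset, n_rounds=3):
        """Partition a dataset (a list here) into sequential rounds."""
        size = max(1, len(dataset) // n_rounds)
        return [dataset[i:i + size] for i in range(0, len(dataset), size)]

    def lifelong_learning_loop(dataset, model, train_fn, eval_fn):
        """Train incrementally on each round and evaluate as new data arrives."""
        for round_id, round_data in enumerate(split_into_rounds(dataset)):
            model = train_fn(model, round_data)   # incremental update, not retraining
            print(f"round {round_id}: {eval_fn(model, round_data)}")
        return model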

@CreativityH (Author)

> Thanks @CreativityH for selecting me as the LFX mentee for this term, really excited to tackle this task!!!
>
> Just wanted to know where I should reach out to you for further plans to work on this project idea; I couldn't find you on the KubeEdge Slack channel. Also, which community meeting do I need to join? Although I added the calendar, I am not sure which one to attend. Can you guide me with further instructions?

Hi @octonawish-akcodes, congratulations! The KubeEdge SIG AI community meeting link is here. For further work planning, please contact my graduate student at [email protected]; he will assist you with planning.

@octonawish-akcodes

> (quoting the exchange above)

I sent the mail :))

@aryan0931 (Contributor)

Thanks, @CreativityH, for selecting me as an LFX mentee for this term! I'm really excited to tackle this task and contribute to the project.

Just wanted to check where I should reach out to you to discuss the next steps and further plans for working on this project idea.

Looking forward to collaborating! 🙌

@CreativityH (Author)

CreativityH commented Sep 5, 2024 via email

@aryan0931 (Contributor)

> Dear Aryan Yadav, my student Tianyu Tu will contact you for follow-up arrangements. He has a lot of experience in this research direction, and you can communicate more. We can discuss at any time. Bests, Chuang.

Okay, sir, I received the mail from him.
