Commit

tb-dhk committed Jul 2, 2024
1 parent 001730c commit a79ef1f
Showing 3 changed files with 40 additions and 175 deletions.
43 changes: 20 additions & 23 deletions .github/workflows/deploy.yml
@@ -23,42 +23,39 @@ concurrency:
group: pages
cancel-in-progress: false


jobs:
# Build job
build:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0 # Not needed if lastUpdated is not enabled
# - uses: pnpm/action-setup@v3 # Uncomment this if you're using pnpm
# - uses: oven-sh/setup-bun@v1 # Uncomment this if you're using Bun
fetch-depth: 0
- name: Setup Node
uses: actions/setup-node@v4
uses: actions/setup-node@v2
with:
node-version: 20
cache: npm # or pnpm / yarn
- name: Setup Pages
uses: actions/configure-pages@v4
- name: Install dependencies
run: npm ci # or pnpm install / yarn install / bun install
- name: Build with VitePress
run: npm run docs:build # or pnpm docs:build / yarn docs:build / bun run docs:build
- name: Upload artifact
uses: actions/upload-pages-artifact@v3
node-version: '14'
- run: npm ci
- run: npm run docs:build
- uses: actions/upload-artifact@v2
with:
name: vitepress-site
path: docs/.vitepress/dist

# Deployment job
deploy:
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}
needs: build
runs-on: ubuntu-latest
name: Deploy
needs: build
steps:
- name: Checkout
uses: actions/checkout@v2
- uses: actions/download-artifact@v2
with:
name: vitepress-site
path: docs/.vitepress/dist
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@v4
uses: peaceiris/actions-gh-pages@v3
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: docs/.vitepress/dist

157 changes: 12 additions & 145 deletions README.md
@@ -1,154 +1,21 @@
# food pod ai model
# welcome to the food:pod wiki!

the food pod ai model is designed to detect food items within the food pod, facilitating efficient waste sorting and management. this readme.md provides comprehensive documentation for the data preparation, model training, and usage of the ai model.
## the food:pod project

## prerequisites
the food:pod project is a collaborative effort by students from hwa chong institution aimed at tackling food wastage through innovative technology. it comprises three main components: the app, the bin, and the ai model.

ensure you have the following installed:
- python (version 3.6 or higher)
- required python packages (you can install them using `pip install -r requirements.txt`)
- make
## components

## directory structure
- **food:pod**: a smart food waste bin utilizing image recognition to identify and weigh deposited food items, integrating with the ai model for waste analysis and tracking.
[learn more about the food:pod](bin).

the directory structure should resemble the following:
- **food:pod app**: manages user interaction, offering insights and recommendations based on data from the food bin.
[learn more about the food:pod app](app).

```
datasets/
└── data/
└── <food_category>/
└── <food_name>/
├── images/
│ ├── train/
│ ├── val/
│ └── test/
├── labels/
│ ├── train/
│ └── val/
└── boxes/
data/
└── data.yaml
```
- **food:pod image recognition model**: trained to detect and quantify food waste, using machine learning to enhance accuracy over time.
[learn more about the image recognition model](model).

## steps
## purpose

### step 1: source images
the project aims to harness technology to promote mindful consumption and reduce food wastage by providing users with real-time data and insights.

collect images for your food category and food name. ensure you have separate sets of images for training and testing. place them in directories of your choice.

### step 2: annotate training images using vgg image annotator

annotate objects in your training images using the [vgg image annotator (via)](https://www.robots.ox.ac.uk/~vgg/software/via/). export the annotations as csv files. ensure that the annotations contain accurate information about the annotated objects, including filename, object identifier, attributes, bounding box coordinates, and class id. the exported csv files should match the specified format.
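
for reference, a minimal sketch of how a via rectangle annotation could be converted into a yolo-format label line. the column names assume the standard via csv export, and `via_export.csv`, the image size, and the single class id are placeholders; the project's exact csv layout and conversion script may differ.

```python
import csv
import json

# assumed image dimensions -- read these from the actual image in practice
IMG_W, IMG_H = 640, 480

with open("via_export.csv", newline="") as f:  # placeholder filename
    for row in csv.DictReader(f):
        # region_shape_attributes holds the bounding box as a json string
        shape = json.loads(row["region_shape_attributes"] or "{}")
        if shape.get("name") != "rect":
            continue  # skip rows without a rectangular region
        # convert top-left x/y plus width/height into normalised yolo format
        x_c = (shape["x"] + shape["width"] / 2) / IMG_W
        y_c = (shape["y"] + shape["height"] / 2) / IMG_H
        w, h = shape["width"] / IMG_W, shape["height"] / IMG_H
        print(f"0 {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}")  # class id 0 (nc: 1)
```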

### step 3: move training images

place your training images in a directory of your choice. use the following command to move the training images:

```sh
make move_train_images category=<food_category> name=<food_name> src_train_images=<path/to/train/images>
```

### step 4: move testing images

place your testing images in a directory of your choice. use the following command to move the testing images:

```sh
make move_test_images category=<food_category> name=<food_name> src_test_images=<path/to/test/images>
```

### step 5: move label files

place your label files in a directory of your choice. use the following command to move the label files:

```sh
make move_labels category=<food_category> name=<food_name> src_label_files=<path/to/label/files>
```

### step 6: convert and augment training images

convert and augment the training images using `convert.py`. use the following command:

```sh
make convert_and_augment category=<food_category> name=<food_name>
```
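
`convert.py` is not shown in this diff; as a rough illustration only, an augmentation pass over one training image might look like the following. the use of albumentations, the file path, and the specific transforms are assumptions, not the project's actual pipeline.

```python
import albumentations as A
import cv2

# a small set of example transforms; convert.py reportedly applies 24 of them
transform = A.Compose(
    [
        A.HorizontalFlip(p=0.5),
        A.RandomBrightnessContrast(p=0.3),
        A.Rotate(limit=15, p=0.5),
    ],
    # keep yolo-format bounding boxes in sync with the transformed image
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)

# placeholder path -- substitute a real training image
image = cv2.imread("datasets/data/fruit/apple/images/train/apple_001.jpg")
augmented = transform(image=image, bboxes=[[0.5, 0.5, 0.2, 0.3]], class_labels=[0])
aug_image, aug_bboxes = augmented["image"], augmented["bboxes"]
```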

### step 7: split images between train and validation sets

split the images into training and validation sets using `tv.py`. use the following command:

```sh
make split_images category=<food_category> name=<food_name> ratio=<train_ratio>
```
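
`tv.py` is likewise not included here; the split it performs presumably amounts to something like the sketch below, where `ratio` is the fraction of images kept for training (paths, file extension, and the category/name values are assumptions).

```python
import random
import shutil
from pathlib import Path

def split_images(category: str, name: str, ratio: float = 0.8) -> None:
    base = Path("datasets/data") / category / name / "images"
    train_dir, val_dir = base / "train", base / "val"
    val_dir.mkdir(parents=True, exist_ok=True)

    images = sorted(train_dir.glob("*.jpg"))  # assumed extension
    random.shuffle(images)
    n_val = int(len(images) * (1 - ratio))  # everything not kept for training

    # move the validation share out of train/ (matching label files under
    # labels/train would need the same treatment)
    for img in images[:n_val]:
        shutil.move(str(img), str(val_dir / img.name))

split_images("fruit", "apple", ratio=0.8)  # hypothetical category and name
```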

### step 8: update `data/data.yaml`

update the `data.yaml` file to reflect the appropriate directories. use the following command:

```sh
make update_yaml category=<food_category> name=<food_name>
```

### step 9: train the model

run `main.py`.

```sh
python3 main.py
```
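
`main.py` is not part of this diff either; with the ultralytics yolov8 api (the architecture named later in this readme), the core training call might look roughly like this. the pretrained checkpoint, epoch count, and image size are assumptions.

```python
from ultralytics import YOLO

# start from a pretrained checkpoint and fine-tune on the prepared dataset
model = YOLO("yolov8n.pt")  # assumed starting weights
model.train(data="data/data.yaml", epochs=100, imgsz=640)  # assumed hyperparameters
```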

## updating data.yaml

the `train_model` target in the makefile updates the `data.yaml` file with the correct paths.

```yaml
train: "<food_category>/<food_name>/images/train"
val: "<food_category>/<food_name>/images/val"
test: "<food_category>/<food_name>/images/test"
nc: 1
```
## notes
- ensure the images and labels are correctly formatted and placed in the respective directories before running the makefile.
- modify the `convert.py`, `tv.py`, and `train.py` scripts according to your specific requirements.

## food pod ai model documentation

### purpose

the food pod ai model detects food items within the food pod, aiding in waste sorting and management.

### use cases

- food waste management systems
- environmental monitoring in food establishments
- smart city initiatives for waste reduction

### dataset

the model was trained on images of food, owing to the scarcity of food waste images.

### architecture

the model is based on the yolov8 architecture, optimized for real-time food item detection.

### training

- data augmentation: utilized 24 different augmentation techniques for enhanced model robustness.
- specialized training: focused on food-related images to adapt the model for food waste detection.

### inference

```python
# load the trained yolov8 model (the YOLO class from the ultralytics package)
from ultralytics import YOLO

model = YOLO(model_weights_file)  # path to the trained weights, e.g. best.pt
# perform inference on a list of image paths
results = model(image_paths)
# display each annotated result
for result in results:
    result.show()
```

### note

for optimal performance, deploy the model in environments with sufficient lighting and minimal occlusion.
15 changes: 8 additions & 7 deletions wiki/docs/introduction.md
@@ -1,20 +1,21 @@
welcome to the food:pod wiki!
# welcome to the food:pod wiki!

# the food:pod project
## the food:pod project

the food:pod project is a project by students from hwa chong institution aimed at reducing food wastage through innovative technology. it consists of three main components: the app, the bin, and the ai model.
the food:pod project is a collaborative effort by students from hwa chong institution aimed at tackling food wastage through innovative technology. it comprises three main components: the app, the bin, and the ai model.

## components

- the **food:pod**: a smart food waste bin that uses image recognition to identify and weigh food items deposited, integrating with the ai model to analyze and track waste.
- **food:pod**: a smart food waste bin utilizing image recognition to identify and weigh deposited food items, integrating with the ai model for waste analysis and tracking.
[learn more about the food:pod](bin).

- the **food:pod app**: handles user interaction, providing insights and recommendations based on data from the food bin.
- **food:pod app**: manages user interaction, offering insights and recommendations based on data from the food bin.
[learn more about the food:pod app](app).

- the **food:pod image recognition model**: trained to detect and quantify food waste, leveraging machine learning to improve accuracy over time.
- **food:pod image recognition model**: trained to detect and quantify food waste, using machine learning to enhance accuracy over time.
[learn more about the image recognition model](model).

## purpose

the project aims to leverage technology to encourage mindful consumption and reduce food wastage by providing real-time data and insights to users.
the project aims to harness technology to promote mindful consumption and reduce food wastage by providing users with real-time data and insights.
