Merge pull request #1 from helijarv/main
Tactics from 'Green tactics for ML-important QAs'
Showing 72 changed files with 739 additions and 0 deletions.
22 changes: 22 additions & 0 deletions
docs/_posts/algorithm-design/2023-08-01choose-an-energy-efficient-algorithm.md
@@ -0,0 +1,22 @@
---
layout: tactic

title: "Choose an energy efficient algorithm"
tags: machine-learning algorithms design-tactic
t-sort: "Awesome Tactic"
t-type: "Architectural Tactic"
categories: algorithm-design
t-description: "Different ML algorithms have different levels of energy consumption. For example, the K-nearest neighbors algorithm has a far higher energy consumption than Random Forest (Kaack et al., 2022). High energy consumption does not necessarily mean that those algorithms perform better or achieve higher accuracy than low-energy algorithms. Thus, choosing a suitable, energy-efficient algorithm that achieves the desired outcomes can reduce the energy consumption of ML models (Kaack et al., 2022)."
t-participant: "Data Scientist"
t-artifact: "Algorithm"
t-context: "Machine Learning"
t-feature: "Inference"
t-intent: "Choose an energy efficient algorithm that can achieve the desired model outcomes"
t-targetQA: "Energy efficiency"
t-relatedQA: "Appropriateness"
t-measuredimpact:
t-source: "Master Thesis 'Green tactics for ML-important QAs' by Heli Järvenpää (2023);
Kaack, L. H., Donti, P. L., Strubell, E., Kamiya, G., Creutzig, F., & Rolnick, D. (2022). Aligning artificial intelligence with climate change mitigation. Nature Climate Change, 12(6), 518-527."
t-source-doi: "DOI: 10.1038/s41558-022-01377-7"
t-diagram: "choose-an-energy-efficient-algorithm.png"
---
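A minimal sketch of how this tactic could be applied in practice: benchmark candidate algorithms on energy before committing to one. It assumes scikit-learn and the codecarbon energy meter; the dataset, the two candidates, and all sizes are illustrative, and any other energy proxy (wall-clock time, RAPL counters) slots in the same way.

```python
# Compare candidate algorithms by estimated energy/emissions before choosing.
# Assumes the codecarbon package is installed; numbers here are illustrative.
from codecarbon import EmissionsTracker
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=20_000, n_features=20, random_state=0)

candidates = {
    "knn": KNeighborsClassifier(),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

for name, model in candidates.items():
    tracker = EmissionsTracker(project_name=name, log_level="error")
    tracker.start()
    model.fit(X, y)
    model.predict(X)          # include inference, the phase this tactic targets
    kg_co2 = tracker.stop()   # estimated emissions for fit + predict
    print(f"{name}: ~{kg_co2:.6f} kg CO2eq, accuracy={model.score(X, y):.3f}")
```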
26 changes: 26 additions & 0 deletions
...rithm-design/2023-08-01consider-reinforcement-learning-for-energy-efficiency.md
@@ -0,0 +1,26 @@
---
layout: tactic

title: "Consider reinforcement learning for energy-efficiency"
tags: machine-learning algorithms design-tactic
t-sort: "Awesome Tactic"
t-type: "Architectural Tactic"
categories: algorithm-design
t-description: "Algorithms can be designed to optimize energy efficiency through reinforcement learning. For example, a reinforcement learning agent can be designed to select the most energy-efficient execution target. Other quality attributes, such as security and privacy, can be addressed simultaneously (UbiPriSEQ)."
t-participant: "Data Scientist"
t-artifact: "Algorithm"
t-context: "Machine Learning"
t-feature: "Reinforcement Learning"
t-intent: "Use reinforcement learning algorithms to optimize the energy efficiency (or other QAs) of systems"
t-targetQA: "Energy efficiency"
t-relatedQA: "Inference"
t-measuredimpact:
t-source: "Master Thesis 'Green tactics for ML-important QAs' by Heli Järvenpää (2023);
Kim, Y. G., & Wu, C. J. (2020, October). AutoScale: Energy efficiency optimization for stochastic edge inference using reinforcement learning. In 2020 53rd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO) (pp. 1082-1096). IEEE.;
Mohammed, T., Albeshri, A., Katib, I., & Mehmood, R. (2020). UbiPriSEQ—Deep reinforcement learning to manage privacy, security, energy, and QoS in 5G IoT hetnets. Applied Sciences, 10(20), 7120."
t-source-doi: "DOI: 10.1109/MICRO50266.2020.00090;
DOI: 10.3390/app10207120"
t-diagram: "consider-reinforcement-learning-for-energy-efficiency.png"
---
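To make the tactic concrete, here is a toy sketch in the spirit of AutoScale: a tabular epsilon-greedy learner picks an execution target to minimize energy. The states, targets, and energy table are invented for illustration and are not taken from the cited papers.

```python
# Toy bandit-style RL agent that learns which execution target (on-device CPU,
# edge server, cloud) minimizes energy per inference. Energy numbers are fake.
import random

TARGETS = ["device", "edge", "cloud"]
STATES = ["small_input", "large_input"]          # hypothetical workload states
Q = {(s, t): 0.0 for s in STATES for t in TARGETS}

def energy_joules(state, target):
    # Stand-in for a real measurement; larger inputs favor offloading.
    table = {("small_input", "device"): 1.0, ("small_input", "edge"): 2.5,
             ("small_input", "cloud"): 4.0, ("large_input", "device"): 9.0,
             ("large_input", "edge"): 4.5, ("large_input", "cloud"): 3.5}
    return table[(state, target)] + random.gauss(0, 0.2)

alpha, epsilon = 0.1, 0.1
for _ in range(5_000):
    s = random.choice(STATES)
    t = (random.choice(TARGETS) if random.random() < epsilon
         else max(TARGETS, key=lambda a: Q[(s, a)]))
    reward = -energy_joules(s, t)                # less energy -> higher reward
    Q[(s, t)] += alpha * (reward - Q[(s, t)])    # one-step bandit update

for s in STATES:
    best = max(TARGETS, key=lambda a: Q[(s, a)])
    print(f"{s}: route inference to {best}")
```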
28 changes: 28 additions & 0 deletions
docs/_posts/algorithm-design/2023-08-01decrease-model-complexity.md
@@ -0,0 +1,28 @@
---
layout: tactic

title: "Decrease model complexity"
tags: machine-learning algorithms design-tactic
t-sort: "Awesome Tactic"
t-type: "Architectural Tactic"
categories: algorithm-design
t-description: "Complex AI models have been shown to have high energy consumption, so scaling down model complexity can contribute to environmental sustainability. Examples include using a simple three-layered convolutional neural network architecture to learn post-processing tasks for CT scans (Morotti et al., 2021) and using shallower decision trees (Abreu et al., 2020)."
t-participant: "Data Scientist"
t-artifact: "Algorithm"
t-context: "Machine Learning"
t-feature: "Inference"
t-intent: "Decreasing model complexity makes ML algorithms simpler without sacrificing too much accuracy. These simplified models require less computing power, which makes them more energy-efficient."
t-targetQA: "Energy efficiency"
t-relatedQA:
t-measuredimpact:
t-source: "Master Thesis 'Green tactics for ML-important QAs' by Heli Järvenpää (2023);
Morotti, E., Evangelista, D., & Loli Piccolomini, E. (2021). A green prospective for learned post-processing in sparse-view tomographic reconstruction. Journal of Imaging, 7(8), 139.;
Abreu, B. A., Grellert, M., & Bampi, S. (2020, October). VLSI design of tree-based inference for low-power learning applications. In 2020 IEEE International Symposium on Circuits and Systems (ISCAS) (pp. 1-5). IEEE."
t-source-doi: "DOI: 10.3390/jimaging7080139;
DOI: 10.1109/ISCAS45731.2020.9180704"
t-diagram: "decrease-model-complexity.png"
---
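A minimal sketch of trading depth for energy in the shallow-tree variant of this tactic: a depth-capped decision tree often retains most of the accuracy of an unconstrained one at a fraction of the compute. It assumes scikit-learn; the depth cap and accuracy budget are illustrative.

```python
# Compare an unconstrained decision tree with a shallow one and accept the
# shallow model if the accuracy drop stays within a chosen budget.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=10_000, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

deep = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
shallow = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)

print("deep:    depth", deep.get_depth(), "accuracy", deep.score(X_te, y_te))
print("shallow: depth", shallow.get_depth(), "accuracy", shallow.score(X_te, y_te))
# Keep the shallow model if the accuracy drop is within budget (e.g. 2%).
```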
25 changes: 25 additions & 0 deletions
docs/_posts/algorithm-design/2023-08-01design-dynamic-parameter-adaptation.md
@@ -0,0 +1,25 @@
---
layout: tactic

title: "Design dynamic parameter adaptation"
tags: machine-learning algorithms design-tactic energy-footprint measured
t-sort: "Awesome Tactic"
t-type: "Architectural Tactic"
categories: algorithm-design
t-description: "Dynamic parameter adaptation means that the hyperparameters of an ML model are dynamically adapted based on the input data, instead of fixing exact parameter values in the algorithm. For example, García-Martín et al. used an nmin adaptation method for Very Fast Decision Trees. The nmin method allows the algorithm to grow faster in branches where there is more confidence in creating a split, while delaying splits on the less confident branches. This method resulted in decreased energy consumption."
t-participant: "Data Scientist"
t-artifact: "Algorithm"
t-context: "Machine Learning"
t-feature: "Inference"
t-intent: "Design parameters that are dynamically adapted based on the input data"
t-targetQA: "Energy efficiency"
t-relatedQA: "Accuracy"
t-measuredimpact: "Using the nmin method in Very Fast Decision Trees resulted in lower energy consumption in 22 out of 29 tested datasets, with an average 7% decrease in energy footprint. Additionally, nmin showed higher accuracy for 55% of the datasets, with an average difference of less than 1%."
t-source: "Master Thesis 'Green tactics for ML-important QAs' by Heli Järvenpää (2023);
Kim, Y. G., & Wu, C. J. (2020, October). AutoScale: Energy efficiency optimization for stochastic edge inference using reinforcement learning. In 2020 53rd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO) (pp. 1082-1096). IEEE.;
Mohammed, T., Albeshri, A., Katib, I., & Mehmood, R. (2020). UbiPriSEQ—Deep reinforcement learning to manage privacy, security, energy, and QoS in 5G IoT hetnets. Applied Sciences, 10(20), 7120."
t-source-doi: "DOI: 10.1007/s41060-021-00246-4"
t-diagram: "design-dynamic-parameter-adaptation.png"
---
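A simplified sketch of the nmin idea, under stated assumptions: the function decides how many more instances a Hoeffding-tree leaf should buffer before re-checking for a split, waiting longer where the split decision is far from confident. Tree internals are elided; `value_range`, `delta`, and the floor value are illustrative defaults, not the paper's.

```python
# nmin-style dynamic adaptation: instead of re-evaluating a split every fixed
# n instances, adapt the wait to the confidence of the current best split.
import math

def hoeffding_bound(value_range, delta, n):
    return math.sqrt(value_range**2 * math.log(1 / delta) / (2 * n))

def next_nmin(gain_best, gain_second, n, value_range=1.0, delta=1e-7,
              nmin_floor=200):
    """Return how many more instances to buffer before re-evaluating a split."""
    eps = hoeffding_bound(value_range, delta, n)
    if gain_best - gain_second > eps:
        return 0                      # confident: split now, grow this branch
    # Not confident yet: estimate the n needed for the bound to shrink below
    # the gain gap, so low-confidence branches wait instead of re-checking.
    gap = max(gain_best - gain_second, 1e-6)
    n_needed = value_range**2 * math.log(1 / delta) / (2 * gap**2)
    return max(nmin_floor, int(n_needed - n))

print(next_nmin(gain_best=0.30, gain_second=0.10, n=500))   # confident branch
print(next_nmin(gain_best=0.12, gain_second=0.11, n=500))   # wait much longer
```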
22 changes: 22 additions & 0 deletions
..._posts/algorithm-design/2023-08-01select-a-lightweight-algorithm-alternative.md
@@ -0,0 +1,22 @@
---
layout: tactic

title: "Select a lightweight algorithm alternative"
tags: machine-learning algorithms design-tactic
t-sort: "Awesome Tactic"
t-type: "Architectural Tactic"
categories: algorithm-design
t-description: "Some algorithms have lightweight alternatives. Using these lighter models can lower the environmental impact without losing important quality attributes. For example, Sorbaro et al. (2020) note that spiking neural networks (SNNs) are an alternative to convolutional neural networks (CNNs): a CNN can be converted to an SNN without a significant loss of accuracy or performance."
t-participant: "Data Scientist"
t-artifact: "Algorithm"
t-context: "Machine Learning"
t-feature: "Inference"
t-intent: "If possible, choose lighter alternatives of existing algorithms"
t-targetQA: "Energy efficiency"
t-relatedQA: "Accuracy, Performance"
t-measuredimpact:
t-source: "Master Thesis 'Green tactics for ML-important QAs' by Heli Järvenpää (2023);
Sorbaro, M., Liu, Q., Bortone, M., & Sheik, S. (2020). Optimizing the energy consumption of spiking neural networks for neuromorphic applications. Frontiers in Neuroscience, 14, 662."
t-source-doi: "DOI: 10.1016/S0925-2312(01)00658-0"
t-diagram: "select-a-lightweight-algorithm-alternative.png"
---
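A minimal sketch of the selection step, not of CNN-to-SNN conversion itself: benchmark a heavyweight model against a lighter alternative and keep the light one if it stays within an accuracy budget. The model pair, the time-as-energy proxy, and the 1% budget are all illustrative assumptions.

```python
# Keep the lightweight alternative when its accuracy loss is within budget.
import time
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5_000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def profile(model):
    t0 = time.perf_counter()
    model.fit(X_tr, y_tr)
    model.predict(X_te)
    return model.score(X_te, y_te), time.perf_counter() - t0  # energy proxy

acc_heavy, sec_heavy = profile(GradientBoostingClassifier(random_state=0))
acc_light, sec_light = profile(LogisticRegression(max_iter=1000))

if acc_heavy - acc_light <= 0.01:                # 1% accuracy budget
    print(f"use the light model: {sec_heavy / sec_light:.1f}x faster, "
          f"accuracy {acc_light:.3f} vs {acc_heavy:.3f}")
```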
22 changes: 22 additions & 0 deletions
docs/_posts/algorithm-design/2023-08-01use-built-in-library-functions.md
@@ -0,0 +1,22 @@
---
layout: tactic

title: "Use built-in library functions"
tags: machine-learning algorithm-design design-tactic libraries
t-sort: "Awesome Tactic"
t-type: "Architectural Tactic"
categories: "algorithm-design"
t-description: "Apply built-in library functions in the machine learning model instead of writing custom implementations. Existing built-in library functions are usually optimized and well tested, which is why they may offer better performance and energy efficiency than custom-made functions. These built-in libraries can be used, for instance, for tensor operations."
t-participant: "Data Scientist"
t-artifact: "Algorithm"
t-context: "Machine Learning"
t-feature:
t-intent: "Use built-in library functions for ML models if possible."
t-targetQA: "Performance"
t-relatedQA: "Energy efficiency"
t-measuredimpact:
t-source: "Master Thesis 'Green tactics for ML-important QAs' by Heli Järvenpää (2023)"
t-source-doi: "DOI:10.1145/3530019.3530035"
t-diagram: "use-built-in-library-functions.png"
---
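A small sketch of this tactic on a tensor operation: the optimized built-in (NumPy's BLAS-backed matrix multiply) against a hand-rolled loop computing the same result. On the same hardware, the runtime gap is a reasonable proxy for the energy gap; the matrix size is illustrative.

```python
# Prefer an optimized built-in over a custom implementation of the same op.
import time
import numpy as np

a = np.random.rand(500, 500)
b = np.random.rand(500, 500)

t0 = time.perf_counter()
c_builtin = a @ b                      # optimized BLAS-backed matrix multiply
t_builtin = time.perf_counter() - t0

t0 = time.perf_counter()
c_manual = np.zeros((500, 500))
for i in range(500):                   # naive custom implementation
    for j in range(500):
        c_manual[i, j] = np.sum(a[i, :] * b[:, j])
t_manual = time.perf_counter() - t0

assert np.allclose(c_builtin, c_manual)
print(f"built-in: {t_builtin:.4f}s, manual: {t_manual:.2f}s")
```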
23 changes: 23 additions & 0 deletions
docs/_posts/data-centric/2023-08-01apply_sampling_techniques.md
@@ -0,0 +1,23 @@
---
layout: tactic

title: "Apply sampling techniques"
tags: data-processing machine-learning design-tactic measured energy-footprint
t-sort: "Awesome Tactic"
t-type: "Architectural Tactic"
categories: data-centric
t-description: "The size of the input data correlates positively with the energy consumption of computing, so reducing the size of the input data can improve the energy efficiency of ML. Input data can be reduced by using only a subset of the original data; this is called sampling. There are different ways of sampling (e.g. simple random sampling or systematic sampling); Verdecchia et al. (2022) used stratified sampling, which randomly selects data points from homogeneous subgroups of the original dataset."
t-participant: "Data Scientist"
t-artifact: "Data"
t-context: "Machine Learning"
t-feature:
t-intent: "Use a subset of the original input data for training and inference"
t-targetQA: "Energy Efficiency"
t-relatedQA: "Accuracy, data representativeness"
t-measuredimpact: "Sampling can lead to savings in energy consumption. Verdecchia et al. (2022) achieved a decrease in energy consumption of up to 92%."
t-source: "Master Thesis 'Green tactics for ML-important QAs' by Heli Järvenpää (2023);
Verdecchia, R., Cruz, L., Sallou, J., Lin, M., Wickenden, J., & Hotellier, E. (2022, June). Data-centric green AI: an exploratory empirical study. In 2022 International Conference on ICT for Sustainability (ICT4S) (pp. 35-45). IEEE."
t-source-doi: "DOI: 10.1109/ICT4S55073.2022.00015"
t-diagram: "apply-sampling-techniques.png"
---
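A minimal sketch of stratified sampling as described above: keep a small subset of the dataset while preserving class proportions. It assumes scikit-learn; the 10% ratio and the class imbalance are illustrative.

```python
# Train on a stratified 10% subset instead of the full dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=50_000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)

# stratify=y samples randomly *within* each class (homogeneous subgroup),
# so the subset keeps the class proportions of the original dataset.
X_sub, _, y_sub, _ = train_test_split(
    X, y, train_size=0.10, stratify=y, random_state=0)

print(len(X_sub), "of", len(X), "instances kept")
```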
21 changes: 21 additions & 0 deletions
...posts/data-centric/2023-08-01project_data_into_a_lower-dimensional_embedding.md
@@ -0,0 +1,21 @@
---
layout: tactic

title: "Project data into a lower-dimensional embedding"
tags: data-processing machine-learning design-tactic measured energy-footprint
t-sort: "Awesome Tactic"
t-type: "Architectural Tactic"
categories: data-centric
t-description: "Data projection means transforming data into a lower-dimensional embedding and using the data to optimize the projection parameters. Reducing the dimensionality of the input data shrinks the dimensionality of the overall DNN, which leads to improved performance."
t-participant: "Data Scientist"
t-artifact: "Data"
t-context: "Machine Learning"
t-feature:
t-intent: "Project data into a lower-dimensional embedding"
t-targetQA: "Performance"
t-relatedQA: "Accuracy"
t-measuredimpact:
t-diagram: "data-projection.png"
t-source: "Master Thesis 'Green tactics for ML-important QAs' by Heli Järvenpää (2023)"
t-source-doi:
---
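One way to realize this tactic is a learned linear projection such as PCA, sketched below: the projection parameters are fitted on the data, and the downstream model then only needs the smaller input. PCA and the 95% variance target are illustrative choices, not the only option.

```python
# Project inputs into a lower-dimensional embedding before model training.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, _ = load_digits(return_X_y=True)          # 64-dimensional inputs
pca = PCA(n_components=0.95).fit(X)          # keep 95% of the variance
X_low = pca.transform(X)

print(f"{X.shape[1]} -> {X_low.shape[1]} dimensions")
# A DNN trained on X_low needs a smaller input layer than one trained on X.
```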
22 changes: 22 additions & 0 deletions
docs/_posts/data-centric/2023-08-01reduce_number_of_data_features.md
@@ -0,0 +1,22 @@
---
layout: tactic

title: "Reduce the number of data features"
tags: data-processing machine-learning design-tactic
t-sort: "Awesome Tactic"
t-type: "Architectural Tactic"
categories: data-centric
t-description: "A huge number of data features can demand high computing power for training and inference. Reducing the number of features can improve performance while still maintaining accuracy. This can be done by selecting only a subset of all the available features."
t-participant: "Data Scientist"
t-artifact: "Data"
t-context: "Machine Learning"
t-feature:
t-intent: "Reduce the number of data features by choosing only a subset of all the available features"
t-targetQA: "Energy Efficiency"
t-relatedQA: "Accuracy, Data representativeness"
t-measuredimpact: "Reducing the number of input features can result in lower energy consumption while still maintaining accuracy."
t-source: "Master Thesis 'Green tactics for ML-important QAs' by Heli Järvenpää (2023)"
t-source-doi:
t-diagram: "reduce-number-of-data-features.png"
---
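A minimal sketch of feature-subset selection: keep only the k most informative features before training. It assumes scikit-learn; the univariate F-test scorer and k=10 are illustrative choices.

```python
# Select a subset of features so each training instance touches less data.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

X, y = make_classification(n_samples=10_000, n_features=100,
                           n_informative=10, random_state=0)

selector = SelectKBest(score_func=f_classif, k=10).fit(X, y)
X_reduced = selector.transform(X)

print(f"{X.shape[1]} features -> {X_reduced.shape[1]} features")
# Training on X_reduced processes a tenth of the data per instance.
```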
24 changes: 24 additions & 0 deletions
docs/_posts/data-centric/2023-08-01remove_redundant_data.md
@@ -0,0 +1,24 @@
---
layout: tactic

title: "Remove redundant data"
tags: data-processing machine-learning design-tactic
t-sort: "Awesome Tactic"
t-type: "Architectural Tactic"
categories: data-centric
t-description: "Identifying and removing redundant data for ML models reduces computing time, the number of computations, energy consumption, and memory space. Redundant data refers to data points that do not improve the accuracy of the model, so removing these unimportant data points sacrifices little accuracy (Dhabe et al., 2021)."
t-participant: "Data Scientist"
t-artifact: "Data"
t-context: "Machine Learning"
t-feature:
t-intent: "Detecting and removing redundant data reduces the size of the input data, which can require less computation power"
t-targetQA: "Energy Efficiency"
t-relatedQA: "Accuracy, data representativeness"
t-measuredimpact: "Removing redundant data leads to a smaller input dataset, which further decreases computations, computational time, electricity use, and memory space."
t-source: "Master Thesis 'Green tactics for ML-important QAs' by Heli Järvenpää (2023);
Dhabe, P., Mirani, P., Chugwani, R., & Gandewar, S. (2021). Data Set Reduction to Improve Computing Efficiency and Energy Consumption in Healthcare Domain. In Digital Literacy and Socio-Cultural Acceptance of ICT in Developing Countries (pp. 53-64). Cham: Springer International Publishing."
t-diagram: "remove-redundant-data.png"
t-source-doi: "DOI: 10.1007/978-3-030-61089-0_4"
---
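A minimal sketch of the cheapest form of this tactic: dropping exact duplicate rows before training. Dhabe et al. go further and also remove points that do not change model accuracy; that stronger check would wrap this step in a validation loop. The synthetic data is illustrative.

```python
# Drop duplicate rows so the training set shrinks without losing information.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
base = rng.random((2_000, 8))
df = pd.DataFrame(np.vstack([base, base[:500]]))   # inject 500 redundant rows

before = len(df)
df_reduced = df.drop_duplicates()
print(f"{before} rows -> {len(df_reduced)} rows "
      f"({before - len(df_reduced)} redundant rows removed)")
```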
22 changes: 22 additions & 0 deletions
docs/_posts/data-centric/2023-08-01use_input_quantization.md
@@ -0,0 +1,22 @@
---
layout: tactic

title: "Use input quantization"
tags: data-processing machine-learning design-tactic
t-sort: "Awesome Tactic"
t-type: "Architectural Tactic"
categories: data-centric
t-description: "In machine learning, input quantization refers to converting data to a smaller precision (i.e. reducing the number of bits per value). For example, Abreu et al. (2022) investigated different input widths and found that 10 bits is enough for accuracy; increasing the number of bits does not improve accuracy further and only wastes resources. Quantization may even have a positive impact on accuracy, since excessive data precision can lead to overfitting of a machine learning model."
t-participant: "Data Scientist"
t-artifact: "Data"
t-context: "Machine Learning"
t-feature:
t-intent: "Reduce the data precision with input quantization"
t-targetQA: "Accuracy"
t-relatedQA: "Energy-efficiency"
t-measuredimpact:
t-source: "Master Thesis 'Green tactics for ML-important QAs' by Heli Järvenpää (2023);
Abreu, B., Grellert, M., & Bampi, S. (2022). A framework for designing power-efficient inference accelerators in tree-based learning applications. Engineering Applications of Artificial Intelligence, 109, 104638."
t-source-doi: "DOI: 10.1016/j.engappai.2021.104638"
t-diagram: "use-input-quantization.png"
---
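A minimal sketch of input quantization: map continuous features onto a 10-bit integer grid (1024 levels), matching the width Abreu et al. found sufficient. The per-feature min-max scaling is an assumption; a real pipeline would fit the ranges on training data only.

```python
# Quantize continuous inputs to a fixed bit width before training/inference.
import numpy as np

def quantize(X, bits=10):
    levels = 2**bits - 1                         # 1023 for 10-bit inputs
    lo, hi = X.min(axis=0), X.max(axis=0)
    q = np.round((X - lo) / (hi - lo + 1e-12) * levels)
    return q.astype(np.uint16), lo, hi           # 10-bit codes fit in uint16

X = np.random.rand(1_000, 5)
Xq, lo, hi = quantize(X)
print(Xq.dtype, int(Xq.max()))                   # uint16, <= 1023
```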
23 changes: 23 additions & 0 deletions
docs/_posts/deployment/2023-08-01-apply-cloud-fog-network.md
@@ -0,0 +1,23 @@
---
layout: tactic

title: "Apply Cloud Fog Network"
tags: machine-learning deployment architecture measured energy-footprint
t-sort: "Awesome Tactic"
t-type: "Architectural Tactic"
categories: deployment
t-description: "Instead of using distant cloud data centers, there are ways to bring the cloud closer to the edge devices. A cloud fog network (CFN) can be used for more energy-efficient processing: CFN supports an architecture where deep neural network models are processed on servers between end devices and the cloud. Yosuf et al. (2021) present an architecture that consists of four layers: IoT end devices, Access Fog (AF), Metro Fog (MF), and Cloud Data Center (CDC)."
t-participant: "Software Designer"
t-artifact: "Algorithm - deep neural network"
t-context: "Network"
t-feature:
t-intent: "Apply Cloud Fog Network"
t-targetQA: "Performance"
t-relatedQA: "Energy efficiency"
t-measuredimpact: "On average, the use of a cloud fog network (CFN) architecture led to a 68% reduction in power consumption compared to a traditional cloud data center architecture."
t-source: "Master Thesis 'Green tactics for ML-important QAs' by Heli Järvenpää (2023);
Yosuf, B. A., Mohamed, S. H., Alenazi, M. M., El-Gorashi, T. E., & Elmirghani, J. M. (2021, June). Energy-Efficient AI over a Virtualized Cloud Fog Network. In Proceedings of the Twelfth ACM International Conference on Future Energy Systems (pp. 328-334)."
t-source-doi: "DOI: 10.1145/3447555.3465378"
t-diagram: "apply-cloud-fog-network.png"
---
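An illustrative sketch of one placement policy such an architecture enables: process each DNN job at the shallowest of the four layers with enough capacity, so long-haul transport (and its energy) is avoided. The layer capacities, transport costs, and the policy itself are invented for illustration, not taken from Yosuf et al.

```python
# Toy placement decision across the four CFN layers.
LAYERS = [
    {"name": "end_device", "capacity_gflops": 5,     "transport_j_per_mb": 0.0},
    {"name": "access_fog", "capacity_gflops": 200,   "transport_j_per_mb": 0.3},
    {"name": "metro_fog",  "capacity_gflops": 2000,  "transport_j_per_mb": 0.9},
    {"name": "cloud_dc",   "capacity_gflops": 10**5, "transport_j_per_mb": 2.4},
]

def place(job_gflops, input_mb):
    for layer in LAYERS:                      # shallowest feasible layer wins
        if job_gflops <= layer["capacity_gflops"]:
            transport_energy = input_mb * layer["transport_j_per_mb"]
            return layer["name"], transport_energy
    raise RuntimeError("no layer can host this job")

print(place(job_gflops=120, input_mb=4))      # -> ('access_fog', ~1.2 J)
```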
22 changes: 22 additions & 0 deletions
docs/_posts/deployment/2023-08-01-avoid-unnecessary-referencing-to-data.md
@@ -0,0 +1,22 @@
---
layout: tactic

title: "Avoid unnecessary referencing to data"
tags: machine-learning deployment design-tactic
t-sort: "Awesome Tactic"
t-type: "Architectural Tactic"
categories: deployment
t-description: "Machine learning models require reading and writing enormous amounts of data in the ML workflow. Reading data means retrieving information from storage, while writing data means storing or updating information. These operations may cause unnecessary data movement and memory usage, which influence the energy consumption of computing. To avoid non-essential referencing of data, reading and writing operations must be designed carefully."
t-participant: "Software Designer"
t-artifact:
t-context:
t-feature:
t-intent: "Avoid unnecessary reading and writing operations of data"
t-targetQA: "Energy efficiency"
t-relatedQA: "Resource utilization"
t-measuredimpact:
t-source: "Master Thesis 'Green tactics for ML-important QAs' by Heli Järvenpää (2023);
Shanbhag, S., Chimalakonda, S., Sharma, V. S., & Kaulgud, V. (2022, June). Towards a Catalog of Energy Patterns in Deep Learning Development. In Proceedings of the International Conference on Evaluation and Assessment in Software Engineering 2022 (pp. 150-159)."
t-source-doi: "DOI: 10.1145/3530019.3530035"
t-diagram: "avoid-unnecessary-referencing-to-data.png"
---
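A small sketch of one way to apply this: load a dataset once and share the in-memory object across pipeline steps instead of re-reading it from storage in each step. The file path and the two pipeline steps are hypothetical; `lru_cache` just keeps the example minimal.

```python
# Cache the read so repeated pipeline steps reuse one load instead of re-reading.
from functools import lru_cache

import pandas as pd

@lru_cache(maxsize=1)
def load_dataset(path: str) -> pd.DataFrame:
    print("reading", path)                 # executes only on the first call
    return pd.read_csv(path)

def preprocess(path: str) -> pd.DataFrame:
    return load_dataset(path).dropna()

def featurize(path: str) -> pd.DataFrame:
    return load_dataset(path).describe()

# preprocess("data.csv"); featurize("data.csv")  # hypothetical file: one read,
# the second call is served from memory, avoiding a redundant storage access.
```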
21 changes: 21 additions & 0 deletions
docs/_posts/deployment/2023-08-01-consider-federated-learning.md
@@ -0,0 +1,21 @@
---
layout: tactic

title: "Consider federated learning"
tags: machine-learning deployment
t-sort: "Awesome Tactic"
t-type: "Architectural Tactic"
categories: deployment
t-description: "Federated learning (FL) is a machine learning approach that aims to train a shared ML model on decentralized devices. Instead of sending raw data to a central server, FL trains the model directly on the devices where the data is generated, such as mobile phones or edge devices. Only the locally computed model updates (parameters) are then sent to a central server. Federated learning decreases the resources needed for transferring large amounts of data to a central server, which results in improved energy efficiency."
t-participant: "Software Designer"
t-artifact: "Decentralized device"
t-context: "Machine Learning"
t-feature: "Model Training"
t-intent: "Apply federated learning if applicable"
t-targetQA: "Energy efficiency"
t-relatedQA:
t-measuredimpact:
t-source: "Master Thesis 'Green tactics for ML-important QAs' by Heli Järvenpää (2023)"
t-source-doi:
t-diagram: "consider-federated-learning.png"
---
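A minimal sketch of the federated averaging (FedAvg) pattern behind this tactic: each client computes an update on its own data and the server only ever sees weight vectors, never raw data. The toy least-squares objective and single local gradient step stand in for real local training.

```python
# FedAvg in miniature: clients train locally, the server averages their weights.
import numpy as np

rng = np.random.default_rng(0)
clients = [rng.random((100, 3)) for _ in range(5)]   # local data stays local
global_w = np.zeros(3)

def local_update(w, X, lr=0.1):
    # Stand-in for local training: one gradient step on a toy least-squares
    # objective fitting the all-ones target.
    grad = X.T @ (X @ w - 1.0) / len(X)
    return w - lr * grad

for _ in range(20):
    updates = [local_update(global_w, X) for X in clients]
    sizes = np.array([len(X) for X in clients], dtype=float)
    global_w = np.average(updates, axis=0, weights=sizes)  # FedAvg step

print("global model after 20 rounds:", global_w.round(3))
```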