<!DOCTYPE html><html lang="zh-CN">
<head>
<meta http-equiv="content-type" content="text/html; charset=UTF-8" />
<meta http-equiv="X-UA-Compatible" content="IE=Edge" />
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, minimum-scale=1.0, maximum-scale=1.0" />
<title>CNN</title>
<meta name="author" content="[email protected]" />
<meta name="copyright" content="SMRUCC genomics Copyright (c) 2022" />
<meta name="keywords" content="R#; CNN; MLkit" />
<meta name="generator" content="https://github.com/rsharp-lang" />
<meta name="theme-color" content="#333" />
<meta name="description" content="feed-forward phase of deep Convolutional Neural Networks..." />
<meta class="foundation-data-attribute-namespace" />
<meta class="foundation-mq-xxlarge" />
<meta class="foundation-mq-xlarge" />
<meta class="foundation-mq-large" />
<meta class="foundation-mq-medium" />
<meta class="foundation-mq-small" />
<meta class="foundation-mq-topbar" />
<style>

.table-three-line {
	border-collapse:collapse; /* key property: collapse the table's inner and outer borders (otherwise the border is 2px: 1px outside plus 1px inside) */
	border:solid #000000; /* border style (solid) and color (#000000 = black) */
	border-width:2px 0 2px 0px; /* border widths: top right bottom left = 2px 0 2px 0 */
}
.left-1{
	border:solid #000000;border-width:1px 1px 2px 0px;padding:2px;
	font-weight:bolder;
}
.right-1{
	border:solid #000000;border-width:1px 0px 2px 1px;padding:2px;
	font-weight:bolder;
}
.mid-1{
	border:solid #000000;border-width:1px 1px 2px 1px;padding:2px;
	font-weight:bolder;
}
.left{
	border:solid #000000;border-width:1px 1px 1px 0px;padding:2px;
}
.right{
	border:solid #000000;border-width:1px 0px 1px 1px;padding:2px;
}
.mid{
	border:solid #000000;border-width:1px 1px 1px 1px;padding:2px;
}
table caption {font-size:14px;font-weight:bolder;}
</style>
</head>
<body>
<table width="100%" summary="page for {CNN}">
<tbody>
<tr>
<td>{CNN}</td>
<td style="text-align: right;">R# Documentation</td>
</tr>
</tbody>
</table>
<h1>CNN</h1>
<hr />
<p style=" font-size: 1.125em; line-height: .8em; margin-left: 0.5%; background-color: #fbfbfb; padding: 24px; ">
<code>
<span style="color: blue;">require</span>(<span style="color: black; font-weight: bold;">R</span>);
<br /><br /><span style="color: green;">#' feed-forward phase of deep Convolutional Neural Networks</span><br /><span style="color: blue;">imports</span><span style="color: brown"> "CNN"</span><span style="color: blue;"> from</span><span style="color: brown"> "MLkit"</span>;
</code>
</p>
<p>feed-forward phase of deep Convolutional Neural Networks</p>
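<p>As a quick orientation before the function reference below, here is a hedged sketch of how these functions might compose into a training pipeline. The function names are the ones documented on this page; every argument shown (window sizes, <code>filters</code>, <code>file</code>, and so on) is an illustrative guess rather than a confirmed signature, so consult each function's own page for the real parameters.</p>
<pre><code>require(R);
imports "CNN" from "MLkit";

# A hedged sketch only: argument names below are hypothetical,
# not confirmed signatures -- see each function's page.
layers = list(
    input_layer(28, 28, 1),        # 28x28 grayscale input window
    conv_layer(filters = 8),       # learn 8 convolution filters
    relu_layer(),                  # non-saturating activation
    pool_layer(),                  # shrink the feature maps
    full_connected_layer(10),      # one output unit per class
    softmax_layer()                # squash outputs into [0, 1]
);

model = cnn(layers);                        # create a new CNN model
model = training(model, sample_dataset());  # fit on a sample dataset

saveModel(model, file = "./cnn.model");     # persist as binary data
</code></pre>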
<div id="main-wrapper">
<table class="table-three-line">
<tbody><tr>
<td id="n_threads">
<a href="./CNN/n_threads.html">n_threads</a>
</td>
<td><p>get/set the number of parallel threads used by the CNN</p></td>
</tr>
<tr>
<td id="cnn">
<a href="./CNN/cnn.html">cnn</a>
</td>
<td><p>Create a new CNN model</p>

<p>A convolutional neural network (CNN) is a regularized type of feed-forward<br />
neural network that learns feature engineering by itself via filter<br />
(or kernel) optimization. Vanishing gradients and exploding gradients,<br />
seen during backpropagation in earlier neural networks, are prevented by<br />
using regularized weights over fewer connections.</p></td>
</tr>
<tr>
<td id="input_layer">
<a href="./CNN/input_layer.html">input_layer</a>
</td>
<td><p>The input layer is a simple layer that will pass the data through and<br />
create a window into the full training data set. For instance, if<br />
we have an image of size 28x28x1, meaning 28 pixels on the x axis,<br />
28 pixels on the y axis, and one color channel (grayscale),<br />
then this layer might give you a window of another size, for example 24x24x1,<br />
that is randomly chosen in order to create some distortion in the<br />
dataset so the algorithm doesn't over-fit the training data.</p></td>
</tr>
<tr>
<td id="regression_layer">
<a href="./CNN/regression_layer.html">regression_layer</a>
</td>
<td></td>
</tr>
<tr>
<td id="conv_layer">
<a href="./CNN/conv_layer.html">conv_layer</a>
</td>
<td><p>This layer uses different filters to find attributes of the data that<br />
affect the result. As an example, there could be a filter to find<br />
horizontal edges in an image.</p></td>
</tr>
<tr>
<td id="conv_transpose_layer">
<a href="./CNN/conv_transpose_layer.html">conv_transpose_layer</a>
</td>
<td></td>
</tr>
<tr>
<td id="lrn_layer">
<a href="./CNN/lrn_layer.html">lrn_layer</a>
</td>
<td><p>This layer is useful when we are dealing with ReLU neurons. Why is that?<br />
Because ReLU neurons have unbounded activations and we need LRN to normalize<br />
that. We want to detect high frequency features with a large response. If we<br />
normalize around the local neighborhood of the excited neuron, it becomes even<br />
more sensitive as compared to its neighbors.</p>

<p>At the same time, it will dampen the responses that are uniformly large in any<br />
given local neighborhood. If all the values are large, then normalizing those<br />
values will diminish all of them. So basically we want to encourage some kind<br />
of inhibition and boost the neurons with relatively larger activations. This<br />
has been discussed nicely in Section 3.3 of the original paper by Krizhevsky et al.</p></td>
</tr>
<tr>
<td id="tanh_layer">
<a href="./CNN/tanh_layer.html">tanh_layer</a>
</td>
<td><p>Implements the Tanh nonlinearity elementwise, x to tanh(x),<br />
so the output is between -1 and 1.</p></td>
</tr>
<tr>
<td id="softmax_layer">
<a href="./CNN/softmax_layer.html">softmax_layer</a>
</td>
<td><p>[*loss_layers] This layer will squash the result of the activations in the fully<br />
connected layer and give you values between 0 and 1 for all output activations<br />
(see the reference sketch after this table).</p></td>
</tr>
<tr>
<td id="relu_layer">
<a href="./CNN/relu_layer.html">relu_layer</a>
</td>
<td><p>This is a layer of neurons that applies the non-saturating activation<br />
function f(x)=max(0,x). It increases the nonlinear properties of the<br />
decision function and of the overall network without affecting the<br />
receptive fields of the convolution layer.</p></td>
</tr>
<tr>
<td id="leaky_relu_layer">
<a href="./CNN/leaky_relu_layer.html">leaky_relu_layer</a>
</td>
<td></td>
</tr>
<tr>
<td id="maxout_layer">
<a href="./CNN/maxout_layer.html">maxout_layer</a>
</td>
<td><p>Implements the Maxout nonlinearity, which computes max(x)<br />
where x is a vector of size group_size. Ideally, of course,<br />
the input size should be exactly divisible by group_size.</p></td>
</tr>
<tr>
<td id="sigmoid_layer">
<a href="./CNN/sigmoid_layer.html">sigmoid_layer</a>
</td>
<td><p>Implements the Sigmoid nonlinearity elementwise, x to 1/(1+e^(-x)),<br />
so the output is between 0 and 1.</p></td>
</tr>
<tr>
<td id="pool_layer">
<a href="./CNN/pool_layer.html">pool_layer</a>
</td>
<td><p>This layer will reduce the dataset by creating a smaller, zoomed-out<br />
version. In essence, you take a cluster of pixels, take the sum of them,<br />
and put the result in the reduced position of the new image.</p></td>
</tr>
<tr>
<td id="dropout_layer">
<a href="./CNN/dropout_layer.html">dropout_layer</a>
</td>
<td><p>This layer will remove some random activations in order to<br />
defeat over-fitting.</p></td>
</tr>
<tr>
<td id="full_connected_layer">
<a href="./CNN/full_connected_layer.html">full_connected_layer</a>
</td>
<td><p>Neurons in a fully connected layer have full connections to all<br />
activations in the previous layer, as seen in regular Neural Networks.<br />
Their activations can hence be computed with a matrix multiplication<br />
followed by a bias offset.</p></td>
</tr>
<tr>
<td id="gaussian_layer">
<a href="./CNN/gaussian_layer.html">gaussian_layer</a>
</td>
<td></td>
</tr>
<tr>
<td id="sample_dataset">
<a href="./CNN/sample_dataset.html">sample_dataset</a>
</td>
<td></td>
</tr>
<tr>
<td id="sample_dataset.image">
<a href="./CNN/sample_dataset.image.html">sample_dataset.image</a>
</td>
<td></td>
</tr>
<tr>
<td id="auto_encoder">
<a href="./CNN/auto_encoder.html">auto_encoder</a>
</td>
<td></td>
</tr>
<tr>
<td id="training">
<a href="./CNN/training.html">training</a>
</td>
<td><p>Train the CNN network model</p></td>
</tr>
<tr>
<td id="ada_delta">
<a href="./CNN/ada_delta.html">ada_delta</a>
</td>
<td><p>The adaptive delta trainer will look at the differences between the expected result and the current result to train the network.</p></td>
</tr>
<tr>
<td id="ada_grad">
<a href="./CNN/ada_grad.html">ada_grad</a>
</td>
<td><p>The adaptive gradient trainer will over time sum up the square of<br />
the gradient and use it to change the weights.</p></td>
</tr>
<tr>
<td id="adam">
<a href="./CNN/adam.html">adam</a>
</td>
<td><p>Adaptive Moment Estimation (Adam) is an update to the RMSProp optimizer in which<br />
running averages of both the gradients and their magnitudes are used.</p></td>
</tr>
<tr>
<td id="nesterov">
<a href="./CNN/nesterov.html">nesterov</a>
</td>
<td><p>Another extension of gradient descent is due to Yurii Nesterov from 1983,[7] and has been subsequently generalized.</p></td>
</tr>
<tr>
<td id="sgd">
<a href="./CNN/sgd.html">sgd</a>
</td>
<td><p>Stochastic gradient descent (often shortened to SGD), also known as incremental gradient descent, is a<br />
stochastic approximation of the gradient descent optimization method for minimizing an objective function<br />
that is written as a sum of differentiable functions. In other words, SGD tries to find minima or<br />
maxima by iteration (see the optimizer sketch after this table).</p></td>
</tr>
<tr>
<td id="window_grad">
<a href="./CNN/window_grad.html">window_grad</a>
</td>
<td><p>This is AdaGrad, but with a moving-window weighted average,<br />
so the gradient is not accumulated over the entire history of the run.<br />
It is also referred to as Idea #1 in Zeiler's paper on AdaDelta.</p></td>
</tr>
<tr>
<td id="predict">
<a href="./CNN/predict.html">predict</a>
</td>
<td></td>
</tr>
<tr>
<td id="CeNiN">
<a href="./CNN/CeNiN.html">CeNiN</a>
</td>
<td><p>Load a CNN model from file</p></td>
</tr>
<tr>
<td id="detectObject">
<a href="./CNN/detectObject.html">detectObject</a>
</td>
<td><p>Classify an object from the given image data</p></td>
</tr>
<tr>
<td id="saveModel">
<a href="./CNN/saveModel.html">saveModel</a>
</td>
<td><p>Save the CNN model into a binary data file</p></td>
</tr></tbody>
</table>
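<p>Several of the nonlinearity layers above (tanh_layer, sigmoid_layer, relu_layer, softmax_layer, maxout_layer) apply simple elementwise functions, so their math can be stated directly. The sketch below is a plain reference implementation of that math, not the MLkit internals, and it assumes that R#'s base vector functions (<code>pmax</code>, <code>matrix</code>, <code>apply</code>) behave as they do in R.</p>
<pre><code># Reference math for the nonlinearity layers; plain vector code,
# not the MLkit internals (assumes R-compatible base functions).
relu_f    = function(x) pmax(0, x);         # f(x) = max(0, x)
tanh_f    = function(x) tanh(x);            # output in (-1, 1)
sigmoid_f = function(x) 1 / (1 + exp(-x));  # output in (0, 1)
softmax_f = function(x) {
    e = exp(x - max(x));                    # subtract max(x) for numeric stability
    e / sum(e);                             # outputs lie in (0, 1) and sum to 1
}
maxout_f  = function(x, group_size) {
    # max over consecutive groups of group_size inputs;
    # length(x) should be exactly divisible by group_size
    apply(matrix(x, nrow = group_size), 2, max);
}
</code></pre>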
</div>
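<p>The trainer functions (sgd, ada_grad, ada_delta, adam, nesterov, window_grad) are variants of the same weight-update loop. As a hedged illustration, here are the plain SGD and AdaGrad update rules written out; these show the math the trainers encode, not the actual MLkit API, and the function and argument names are this page's own invention.</p>
<pre><code># One gradient step for a weight vector w with gradient g.
# Illustrative math only; not the MLkit trainer API.
sgd_step = function(w, g, lr = 0.01) {
    w - lr * g;                             # step against the gradient
}

# AdaGrad: accumulate squared gradients, shrink each weight's step
adagrad_step = function(w, g, cache, lr = 0.01, eps = 1e-8) {
    cache = cache + g^2;                    # running sum of squared gradients
    w     = w - lr * g / sqrt(cache + eps); # per-weight adaptive step size
    list(w = w, cache = cache);             # return updated weights and state
}
</code></pre>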
<hr />
<div style="text-align: center;">[<a href="../index.html">Document Index</a>]</div>
</body>
</html>