Gradient of a convex and differentiable function.
Question
1) Gradient of a convex and differentiable function.
Select one or more:
- decreases when approaching the minimum.
- is the slope of the tangent line.
- is zero at a minimum.
- is non-zero at a maximum.
Answer
All of the options are correct.
The function will be continuous if it is both convex and differentiable (differentiability alone already implies continuity).
If the function is convex (convex downward), its graph has a local minimum that is also the global minimum.
If it is concave (convex upward), it has a local maximum that is also the global maximum.
At an interior minimum, the gradient of a convex differentiable function is zero: the function stops decreasing there, so the tangent line is horizontal (first-order optimality).
So the statement that the gradient is zero at a minimum is correct.
At a maximum the situation is reversed.
For a convex differentiable function, any point where the gradient vanishes is a global minimum, so the gradient cannot vanish at a maximum of a non-constant convex function (such a maximum can only occur on the boundary of the domain).
The statement that the gradient is non-zero at a maximum is therefore also correct.
As the function descends toward its minimum, the magnitude of the gradient decreases: convexity forces the graph to flatten out, with the tangent line becoming parallel to the x-axis at the minimum itself.
Finally, in one dimension the gradient is simply the derivative, which is the slope of the tangent line, so that statement holds as well.
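As a minimal sketch in plain Python (using the hypothetical example f(x) = x², which is not part of the original question), the gradient's magnitude shrinks as we approach the minimizer x = 0 and vanishes exactly there:

```python
def f(x):
    # Convex, differentiable function with its minimum at x = 0
    return x ** 2

def grad(x):
    # f'(x) = 2x is the slope of the tangent line at x
    return 2 * x

# Walk toward the minimum: |f'(x)| decreases and is 0 at x = 0
for x in [4.0, 2.0, 1.0, 0.5, 0.0]:
    print(f"x = {x:4.1f}   f(x) = {f(x):6.2f}   f'(x) = {grad(x):5.2f}")
```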
2) Identify the FALSE statement(s) about Neural Network.
Select one or more:
- Rectified linear unit is a linear activation function.
- The cross entropy can be used as the loss function for regression problem.
- The output layer must have a single neuron for binary classification problem.
- Sigmoid function maps the input into a value in the range of -1 and +1.
Answer
Options 3 and 4 are the false statements.
The Rectified Linear Unit (ReLU) is a piecewise linear activation function that maps negative inputs to zero while passing positive inputs through unchanged, so the first statement is taken as true here.
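A short sketch of that piecewise behaviour (assuming NumPy is installed):

```python
import numpy as np

def relu(x):
    # ReLU(x) = max(0, x): negatives become 0, positives are unchanged
    return np.maximum(0, x)

x = np.array([-3.0, -1.0, 0.0, 1.0, 3.0])
print(relu(x))  # [0. 0. 0. 1. 3.]
```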
Cross-entropy measures the difference between two probability distributions.
It can be used as a loss function when optimizing classification models such as logistic regression.
As a result, the second statement is likewise taken as true.
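For illustration, a minimal binary cross-entropy computation with made-up labels and predictions (NumPy assumed):

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    # Clip predictions away from 0 and 1 to avoid log(0)
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred)
                    + (1 - y_true) * np.log(1 - y_pred))

y_true = np.array([1, 0, 1, 1])          # ground-truth labels
y_pred = np.array([0.9, 0.2, 0.8, 0.6])  # predicted probabilities
print(binary_cross_entropy(y_true, y_pred))  # smaller = better match
```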
The sigmoid function maps its input into the range 0 to 1, not -1 to +1.
As a result, option 4 is false.
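A quick sketch (NumPy assumed) confirming that sigmoid outputs stay strictly between 0 and 1:

```python
import numpy as np

def sigmoid(x):
    # sigmoid(x) = 1 / (1 + e^(-x)) lies strictly in (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-10.0, -1.0, 0.0, 1.0, 10.0])
print(sigmoid(x))  # approaches 0 and 1 at the extremes, never -1
```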
For binary classification problems there is no restriction that the output layer must have a single neuron: some architectures instead use two output neurons, one per class, with a softmax.
As a result, statement 3 is false.
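As a sketch of both designs (assuming TensorFlow/Keras is available; the 8-feature input and 16-unit hidden layer are hypothetical choices for illustration):

```python
import tensorflow as tf

# Design A: one sigmoid output neuron + binary cross-entropy
model_a = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model_a.compile(optimizer="adam", loss="binary_crossentropy")

# Design B: two softmax output neurons (one per class)
# + sparse categorical cross-entropy; equally valid for binary problems
model_b = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model_b.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

Both models train on the same binary-labelled data; the single-neuron design is merely the more common convention, not a requirement.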
As a result, 3 and 4 are false statements.