AnkiCollab
Notes in Math for RAI 1
To subscribe, use this key: violet-nitrogen-jersey-ceiling-seventeen-black
Status
Last Update
Fields
Published
10/15/2024
The equation \( x^2 = -1 \) leads to the introduction of {{c1::imaginary numbers}}, denoted as \( i = \sqrt{-1} \).
Published
10/15/2024
{{c1::Complex numbers}} are expressed as \( z = a + ib \), where \( a \) is the real part and \( b \) is the imaginary part.
Published
10/15/2024
On the {{c1::Argand diagram}}, the complex number \( z = a + ib \) is represented as a point in the complex plane.
Published
10/15/2024
In polar form, a complex number \( z \) is written as \( r(\cos \theta + i \sin \theta) \), where \( r \) is the {{c1::modulus}} and \( \theta \) is the argument (angle).
Published
10/15/2024
Euler's formula expresses a complex number as \( e^{i\theta} = \cos \theta + i \sin \theta \), combining {{c1::exponentials}} and {{c2::trigonometry}}.
Published
10/15/2024
The modulus of a complex number \( z = a + ib \) is given by {{c1::\( |z| = \sqrt{a^2 + b^2} \)}}
Published
10/15/2024
The {{c1::complex conjugate}} of a number \( z = a + ib \) is {{c2::\( z^* = a - ib \).}}
Published
10/15/2024
The product of two complex numbers \( z_1 = a + ib \) and \( z_2 = c + id \) is \( z_1 z_2 = (ac - bd) + i(ad + bc) \), showing the {{c1::distributive law}} applied to complex numbers.
Published
10/15/2024
To divide two complex numbers, multiply by the {{c1::conjugate}} of the denominator: \( \frac{1}{i} = \frac{1}{i} \times \frac{-i}{-i} = -i \).
Published
10/15/2024
The logarithm of a complex number in polar form is {{c2::\( \log z = \log r + i\theta \),}} where \( r \) is the modulus and \( \theta \) is the argument.
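The complex-number identities in the notes above can be checked numerically. Below is a minimal sketch using Python's standard cmath module; the value \( z = 3 + 4i \) is an arbitrary example chosen for illustration, not part of the deck.

```python
import cmath, math  # complex and real math from the standard library

z = 3 + 4j                        # z = a + ib with a = 3, b = 4
print(abs(z))                     # modulus |z| = sqrt(3^2 + 4^2) = 5.0
print(z.conjugate())              # complex conjugate z* = 3 - 4i

r, theta = cmath.polar(z)         # polar form: r = |z|, theta = argument
print(cmath.rect(r, theta))       # r(cos(theta) + i sin(theta)) recovers z

# Euler's formula: e^{i*theta} = cos(theta) + i*sin(theta)
print(cmath.exp(1j * theta))
print(complex(math.cos(theta), math.sin(theta)))

print(1 / 1j)                     # division: 1/i = -i
print(cmath.log(z))               # log z = log r + i*theta
print(cmath.log(r) + 1j * theta)
```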
Published
10/15/2024
A {{c1::vector}} is a mathematical object that has both magnitude and direction, and is represented by an ordered set of numbers.
Published
10/15/2024
The {{c1::dot product}} of two vectors \( \vec{u} \) and \( \vec{v} \) is \( \vec{u} \cdot \vec{v} = |u||v| \cos \theta \), where \( \theta \) is the angle between them.
Published
10/15/2024
The {{c1::cross product}} of two 3D vectors results in another vector that is orthogonal to both input vectors.
Published
10/15/2024
A {{c1::basis set}} is a set of linearly independent vectors that spans a vector space, allowing any vector in the space to be represented as a linear combination of the basis vectors.
Published
10/15/2024
The {{c1::dimension}} of a vector space is the number of vectors in its basis.
Published
10/15/2024
A vector space is said to be {{c1::closed under addition and scalar multiplication}} if adding any two vectors or scaling them by a scalar still results in a vector within the space.
Published
10/15/2024
The {{c1::length}} or {{c2::norm}} of a vector \( \vec{v} \) is given by \( |\vec{v}| = \sqrt{v_1^2 + v_2^2 + \dots + v_n^2} \).
Published
10/15/2024
A vector \( \vec{v} \) is normalized when its length is 1, and such a vector is called a {{c1::unit vector}}.
Published
10/15/2024
A {{c1::subspace}} is a subset of a vector space that is also a vector space under the same operations.
Published
10/15/2024
The {{c1::span}} of a set of vectors is the set of all possible linear combinations of those vectors.
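The vector operations defined in the notes above translate directly to NumPy. A short sketch follows; the particular vectors are made up for illustration.

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])
v = np.array([2.0, 0.0, 1.0])

# Dot product two ways: component form and |u||v|cos(theta).
dot = np.dot(u, v)                            # 1*2 + 2*0 + 2*1 = 4
cos_theta = dot / (np.linalg.norm(u) * np.linalg.norm(v))
print(dot, np.linalg.norm(u) * np.linalg.norm(v) * cos_theta)

# Cross product: orthogonal to both inputs, so both dot products are 0.
w = np.cross(u, v)
print(np.dot(w, u), np.dot(w, v))

# Norm and normalization to a unit vector of length 1.
print(np.linalg.norm(u))                      # sqrt(1 + 4 + 4) = 3.0
u_hat = u / np.linalg.norm(u)
print(np.linalg.norm(u_hat))                  # 1.0
```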
Published
10/15/2024
A {{c1::linear transformation}} is a function that maps vectors to vectors while preserving vector addition and scalar multiplication.
Published
10/15/2024
A {{c1::matrix}} is used to represent a linear transformation from one vector space to another.
Published
10/15/2024
The product of a matrix and a vector results in a new vector, expressed as \( A \vec{v} = \vec{w} \), where \( A \) is the matrix representing the {{c1::linear transformation}}.
Published
10/15/2024
In {{c1::matrix-vector multiplication}}, the matrix transforms the input vector by applying linear operations to its components.
Published
10/15/2024
A {{c1::state-space model}} represents a system's state using vectors, where each state vector describes the system at a given time.
Published
10/15/2024
In state-space models, the system dynamics are often written as {{c1::\( \vec{x}_{t+1} = A\vec{x}_t + B\vec{u}_t \),}} where \( \vec{x} \) is the state vector and \( \vec{u} \) is the input vector.
Published
10/15/2024
The {{c1::identity matrix}} is a square matrix with 1's on the diagonal and 0's elsewhere, representing the identity transformation.
Published
10/15/2024
A {{c1::composite transformation}} is formed by applying multiple transformations in sequence, represented by the product of their matrices.
Published
10/15/2024
A {{c1::basis vector}} can be transformed by applying a matrix to it, producing a new vector in the transformed space.
Published
10/15/2024
In robotics, state-space models are used to predict future states of a robot based on its current state and inputs, often involving {{c1::linear algebra}}.
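The state-update equation above is just repeated matrix-vector multiplication. Here is a minimal sketch; the matrices A and B and the input values are made-up numbers chosen to illustrate the update, not a real robot model.

```python
import numpy as np

# Hypothetical 2-state system x_{t+1} = A x_t + B u_t.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])   # position integrates velocity over one step
B = np.array([[0.0],
              [0.1]])        # the input acts on the velocity state
x = np.array([[0.0],
              [1.0]])        # initial state: position 0, velocity 1
u = np.array([[0.5]])        # constant control input

for t in range(3):
    x = A @ x + B @ u        # one step of the state-space update
    print(t + 1, x.ravel())
```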
Published
10/15/2024
The {{c1::determinant}} of a square matrix provides a scalar value that summarizes the effect of the matrix on volume scaling.
Published
10/15/2024
A matrix with a zero {{c1::determinant}} is singular, meaning it does not have an inverse and compresses space into a lower dimension.
Published
10/15/2024
The determinant of a 2x2 matrix is computed as {{c1::\( \text{det}(A) = a_{11}a_{22} - a_{12}a_{21} \).}}
Published
10/15/2024
A matrix is {{c1::invertible}} if and only if its determinant is non-zero.
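The determinant facts above can be checked directly in NumPy; the 2x2 matrices below are arbitrary examples, with the second constructed to be singular.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [2.0, 4.0]])
print(np.linalg.det(A))      # a11*a22 - a12*a21 = 12 - 2 = 10: invertible
print(np.linalg.inv(A))      # exists because det(A) != 0

S = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # second row is 2x the first: rank 1
print(np.linalg.det(S))      # 0 (up to rounding): singular
# np.linalg.inv(S) would raise LinAlgError("Singular matrix"),
# since S compresses the plane onto a line and loses information.
```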
Published
10/15/2024
The {{c1::eigendecomposition}} of a matrix expresses the matrix as \( A = V \Lambda V^{-1} \), where \( V \) contains the {{c2::eigenvectors}} and \( \Lambda \) is a diagonal matrix of the eigenvalues.
Published
10/15/2024
{{c1::Eigenvalues}} are scalars that represent the factor by which the corresponding eigenvector is scaled during the linear transformation.
Published
10/15/2024
{{c1::Eigenvectors}} are vectors that remain in the same direction after the matrix transformation, although they may be scaled by {{c2::eigenvalues}}.
Published
10/15/2024
A {{c1::symmetric matrix}} always has real eigenvalues and {{c2::orthogonal}} eigenvectors.
Published
10/15/2024
If a matrix \( A \) has \( n \) distinct eigenvalues, then it has \( n \) {{c1::linearly independent}} eigenvectors.
Published
10/15/2024
The {{c1::characteristic polynomial}} of a matrix is used to find the eigenvalues and is given by \( \det(A - \lambda I) = 0 \).
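A quick numerical check of the eigendecomposition notes above; the 2x2 matrix is an arbitrary example whose characteristic polynomial \( (2-\lambda)(3-\lambda) = 0 \) has distinct roots.

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [1.0, 3.0]])

# Eigenvalues are the roots of det(A - lambda*I) = (2-l)(3-l) = 0.
lam, V = np.linalg.eig(A)    # eigenvalues, eigenvectors (columns of V)
print(lam)                   # 2 and 3 (order not guaranteed): distinct,
                             # so the eigenvectors are independent

# Each column v satisfies A v = lambda v (same direction, just scaled).
for i in range(2):
    print(A @ V[:, i], lam[i] * V[:, i])

# Reconstruct A = V Lambda V^{-1}.
print(V @ np.diag(lam) @ np.linalg.inv(V))
```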
Published
10/15/2024
{{c1::Singular value decomposition (SVD)}} is a general matrix decomposition technique that works for any \( m \times n \) matrix.
Published
10/15/2024
The SVD of a matrix \( A \) is expressed as {{c1::\( A = U \Sigma V^T \),}} where \( U \) and \( V \) are orthogonal matrices and \( \Sigma \) is a diagonal matrix of singular values.
Published
10/15/2024
In SVD, the matrix \( U \) contains the {{c1::left singular vectors}} of \( A \), while \( V \) contains the {{c2::right singular vectors}}.
Published
10/15/2024
The {{c1::singular values}} are the square roots of the eigenvalues of \( A^T A \) or \( A A^T \).
Published
10/15/2024
SVD is useful for data compression because matrices can be approximated by retaining only the largest {{c1::singular values}}.
Published
10/15/2024
The matrix \( U \) in SVD represents a rotation or reflection of the input space, while \( \Sigma \) represents {{c1::scaling}} along the new basis vectors.
Published
10/15/2024
SVD can handle both {{c1::square}} and {{c2::rectangular}} matrices, making it more general than eigendecomposition.
Published
10/15/2024
The {{c1::rank}} of a matrix is equal to the number of non-zero singular values in its SVD.
Published
10/15/2024
The matrix \( V^T \) in SVD {{c1::transforms}} the input space, and the matrix \( U \) {{c2::maps the scaled result}} to the output space.
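The SVD notes above can all be verified on one small example. The 2x3 matrix below is chosen so the singular values come out to round numbers; it is illustrative only.

```python
import numpy as np

A = np.array([[3.0, 2.0, 2.0],
              [2.0, 3.0, -2.0]])      # 2x3: rectangular, eig would not apply

U, s, Vt = np.linalg.svd(A)           # A = U diag(s) V^T
print(s)                              # singular values [5. 3.], largest first

# Singular values are the square roots of the eigenvalues of A A^T.
print(np.sqrt(np.linalg.eigvalsh(A @ A.T))[::-1])

# Rank = number of non-zero singular values.
print(int(np.sum(s > 1e-10)))

# Best rank-1 approximation keeps only the largest singular value,
# which is the idea behind SVD-based data compression.
A1 = s[0] * np.outer(U[:, 0], Vt[0, :])
print(A1)
```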
Published
10/15/2024
A {{c1::system of linear equations (SLE)}} consists of multiple linear equations that must be solved simultaneously for the unknown variables.
Published
10/15/2024
The {{c1::Gaussian elimination}} method systematically reduces a system of equations to an upper triangular form to solve for the unknowns.
Published
10/15/2024
{{c1::Cramer’s rule}} uses determinants to solve a system of linear equations, but it is computationally expensive for large systems.
Published
10/15/2024
A system of equations has {{c1::no solution}} if the equations are {{c2::inconsistent}}, meaning there is no set of values for the unknowns that satisfies every equation simultaneously.
Published
10/15/2024
The {{c1::inverse}} of a matrix can be used to solve the system \( A \vec{x} = \vec{b} \) by calculating \( \vec{x} = A^{-1} \vec{b} \), if the matrix is invertible.
Published
10/15/2024
{{c1::Gaussian elimination}} consists of forward elimination to form a {{c2::triangular matrix}} and back substitution to solve for the unknowns.
Published
10/15/2024
{{c1::Partial pivoting}} is used in Gaussian elimination to improve numerical stability by swapping rows to maximize the pivot element.
Published
10/15/2024
A matrix is {{c1::singular}} if its determinant is zero, which means the system of linear equations either has {{c2::no solution}} or {{c3::infinitely many solutions}}.
Published
10/15/2024
The number of solutions to a system of linear equations depends on the {{c1::rank}} of the coefficient matrix and the augmented matrix.
Published
10/15/2024
In MATLAB, the {{c1::backslash operator}} (A\b) provides a numerically stable way to solve systems of linear equations efficiently.
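A sketch of the elimination procedure described above, assuming a square, non-singular system. The `gauss_solve` helper is written here for illustration; the result is compared against `np.linalg.solve`, the NumPy analogue of MATLAB's backslash operator.

```python
import numpy as np

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Forward elimination: reduce to an upper-triangular system.
    for k in range(n - 1):
        p = k + int(np.argmax(np.abs(A[k:, k])))  # largest pivot for stability
        if p != k:
            A[[k, p]] = A[[p, k]]                 # swap rows (partial pivoting)
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution on the triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(gauss_solve(A, b))        # [0.8 1.4]
print(np.linalg.solve(A, b))    # same result; the analogue of MATLAB's A\b
```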
Published
10/15/2024
The {{c1::inverse}} of a matrix \( A \) is denoted \( A^{-1} \), and satisfies \( A A^{-1} = I \), where \( I \) is the identity matrix.
Published
10/15/2024
Cramer’s rule allows us to solve systems of linear equations using {{c1::determinants}} but is {{c2::inefficient}} for large matrices.
Published
10/15/2024
The {{c2::Gaussian elimination}} approach reduces the augmented matrix \( [A \mid I] \) to {{c1::reduced row echelon form}} to find its inverse.
Published
10/15/2024
The {{c1::condition number}} of a matrix gives a measure of how sensitive the matrix inverse is to small changes in the input.
Published
10/15/2024
To compute the inverse of a matrix, you can use {{c2::Gaussian elimination}} to transform it into the {{c1::identity matrix}}.
Published
10/15/2024
The inverse of a product of matrices \( ABC \) is the product of their inverses in reverse order: {{c1::\( (ABC)^{-1} = C^{-1} B^{-1} A^{-1} \) }}.
Published
10/15/2024
The {{c1::singular value decomposition (SVD)}} is another approach to compute the inverse of matrices, even non-square ones.
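The reverse-order rule and the condition number from the notes above are easy to sanity-check numerically. The matrices here are random (and therefore almost surely invertible); this is a quick check, not a proof.

```python
import numpy as np

rng = np.random.default_rng(0)
# Random 3x3 matrices; almost surely invertible, good enough for a check.
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
C = rng.standard_normal((3, 3))

# (ABC)^{-1} = C^{-1} B^{-1} A^{-1}: inverses in reverse order.
lhs = np.linalg.inv(A @ B @ C)
rhs = np.linalg.inv(C) @ np.linalg.inv(B) @ np.linalg.inv(A)
print(np.allclose(lhs, rhs))    # True

# Condition number: ratio of largest to smallest singular value.
print(np.linalg.cond(A))        # large values mean inv(A) is sensitive
```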
Published
10/15/2024
The {{c1::Moore-Penrose pseudoinverse}} generalizes the inverse of a matrix, especially for non-invertible matrices.
Published
10/15/2024
A matrix is {{c1::non-invertible}} when it maps vectors to a lower-dimensional space, losing information in the process.
Published
10/15/2024
In singular value decomposition (SVD), the pseudoinverse is computed by taking the reciprocal of each {{c1::non-zero singular value}}.
Published
10/15/2024
The pseudoinverse of a matrix can be used to find a {{c1::least-squares solution}} to systems of equations that have no exact solution.
Published
10/15/2024
The {{c2::rank}} of a matrix is the dimension of its image, and the {{c1::nullity}} is the dimension of its kernel.
Published
10/15/2024
The pseudoinverse of a matrix minimizes the {{c1::error}} in systems with no solutions, providing the closest possible solution.
Published
10/15/2024
For a square matrix, the {{c2::pseudoinverse}} and the {{c2::inverse}} are the same when the matrix is {{c1::invertible}}.
Published
10/15/2024
Non-invertible transformations often reduce the dimensionality of the input, resulting in {{c1::linearly dependent}} output vectors.
Published
10/15/2024
The pseudoinverse provides a solution that minimizes the {{c1::Euclidean norm}} of the error when the system has no solutions.
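The pseudoinverse and least-squares notes above come together in one small example. The overdetermined system below is made up for illustration: three equations, two unknowns, no exact solution.

```python
import numpy as np

# Overdetermined system: 3 equations, 2 unknowns, no exact solution.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

x = np.linalg.pinv(A) @ b       # Moore-Penrose pseudoinverse (via SVD)
print(x)                        # least-squares solution

# The dedicated least-squares routine agrees.
print(np.linalg.lstsq(A, b, rcond=None)[0])

# It minimizes the Euclidean norm of the residual A x - b.
print(np.linalg.norm(A @ x - b))
```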
Published
10/15/2024
The derivative of a function \( f(x) \), denoted \( f'(x) \), measures the {{c1::rate of change}} of \( f(x) \) with respect to \( x \).
Published
10/15/2024
The second derivative, \( f''(x) \), represents the rate of change of the {{c1::first derivative}}, and helps identify concavity.
Published
10/15/2024
A function has a {{c1::stationary point}} where its derivative is zero, indicating a potential maximum, minimum, or saddle point.
Published
10/15/2024
The chain rule allows us to differentiate a {{c2::composite function}} by taking the derivative of the outer function and multiplying by the derivative of the inner function.
Published
10/15/2024
For a quadratic function \( f(x) = ax^2 + bx + c \), the derivative is {{c1::\( f'(x) = 2ax + b \),}} showing how the slope changes linearly with \( x \).
Published
10/15/2024
The {{c1::product rule}} is used to differentiate products of two functions, and states that {{c2::\( (uv)' = u'v + uv' \).}}
Published
10/15/2024
To find the second derivative, you apply the limit definition of the derivative to the {{c1::first derivative}}.
Published
10/15/2024
The {{c1::power rule}} states that the derivative of \( x^n \) is \( n \cdot x^{n-1} \).
Published
10/15/2024
A {{c1::critical point}} occurs when the derivative is zero or undefined, and can indicate a maximum, minimum, or inflection point.
Published
10/15/2024
The {{c1::Hessian matrix}} is a matrix of {{c2::second-order partial derivatives}} and is used to analyze the curvature of multivariable functions.
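The derivative rules above can be cross-checked with finite differences, which approximate the limit definition numerically. The quadratic and the step sizes below are arbitrary choices for illustration.

```python
def f(x):
    return 3 * x**2 + 2 * x + 1        # f'(x) = 6x + 2 by the power rule

def derivative(f, x, h=1e-5):
    """Central finite-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

x = 2.0
print(derivative(f, x), 6 * x + 2)     # ~14.0 vs exact 14.0

# Second derivative: apply the same difference idea to f' itself.
h = 1e-4
second = (f(x + h) - 2 * f(x) + f(x - h)) / h**2
print(second)                          # ~6.0, constant for a quadratic
```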
Published
10/15/2024
{{c1::Integration}} is the reverse process of differentiation, often referred to as anti-differentiation.
Published
10/15/2024
The integral of a function can be interpreted as the {{c1::area under a curve}} between two points.
Published
10/15/2024
The indefinite integral of \( f(x) \), denoted \( \int f(x)dx \), includes a constant of integration, denoted as {{c1::\( + C \)}}.
Published
10/15/2024
The process of finding the area under a curve can be formalized using {{c1::definite integrals}} with specific limits.
Published
10/15/2024
The {{c1::Fundamental Theorem of Calculus}} links differentiation and integration, stating that integration reverses the process of differentiation.
Published
10/15/2024
The {{c1::trapezoidal rule}} is a numerical method that approximates the integral by calculating the area of trapezoids under the curve.
Published
10/15/2024
{{c1::Simpson’s rule}} improves the accuracy of numerical integration by fitting parabolas between data points instead of straight lines.
Published
10/15/2024
In integration by parts, the formula is {{c1::\( \int u dv = uv - \int v du \),}} where one part of the function is differentiated and the other is integrated.
Published
10/15/2024
The {{c1::chain rule}} for differentiation can be inverted in the process of integration by substitution.
Published
10/15/2024
The method of {{c1::integration by substitution}} simplifies integration by changing variables to express the integral in a more manageable form.
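The two numerical integration rules from the notes above, implemented side by side. The test integrand \( \sin x \) on \( [0, \pi] \) is chosen because its exact integral is 2, making the error visible.

```python
import numpy as np

def trapezoid(f, a, b, n):
    """Trapezoidal rule: sum the areas of n trapezoids under the curve."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

def simpson(f, a, b, n):
    """Simpson's rule (n even): fit parabolas through triples of points."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h / 3 * (y[0] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum() + y[-1])

# Integral of sin(x) from 0 to pi is exactly 2.
print(trapezoid(np.sin, 0, np.pi, 16) - 2.0)   # small error
print(simpson(np.sin, 0, np.pi, 16) - 2.0)     # much smaller error
```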
Published
10/15/2024
An {{c1::ordinary differential equation (ODE)}} is an equation that involves the derivatives of a function and relates a dependent variable to one independent variable.
Published
10/15/2024
A {{c1::homogeneous ODE}} is an equation where all terms involve the dependent variable or its derivatives, and the right-hand side is zero.
Published
10/15/2024
A {{c1::first-order ODE}} involves only the first derivative of the dependent variable, while a second-order ODE involves the second derivative.
Published
10/15/2024
The method of {{c1::separation of variables}} is used to solve ODEs by separating the variables and integrating both sides independently.
Published
10/15/2024
The solution to an ODE generally includes a constant of integration, which can be determined if {{c1::initial conditions}} are provided.
Published
10/15/2024
An ODE is classified as {{c1::explicit}} if the highest-order derivative is isolated on one side of the equation.
Published
10/15/2024
A {{c1::linear ODE}} takes the form of a linear combination of the dependent variable and its derivatives.
Published
10/15/2024
In the method of {{c1::superposition}}, the general solution to a linear ODE is the sum of linearly independent solutions.
Published
10/15/2024
The solution to a second-order ODE for oscillatory systems can often be written in terms of {{c1::trigonometric functions}}, representing oscillations in the system.
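To close the ODE notes, here is a minimal sketch comparing the separation-of-variables solution of a first-order linear ODE with a forward Euler integration of the same equation; the decay rate k, initial condition, and step count are arbitrary illustrative values.

```python
import numpy as np

# dx/dt = -k x separates as dx/x = -k dt, giving x(t) = x0 * exp(-k t).
k, x0 = 0.5, 1.0
t_end, n = 4.0, 1000
dt = t_end / n

# Forward Euler: step the ODE numerically from the initial condition.
x = x0
for _ in range(n):
    x += dt * (-k * x)

print(x)                        # numerical value at t = 4
print(x0 * np.exp(-k * t_end))  # exact solution; the two should be close
```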