User:Cloudmichael
The first introduction to vectors presents them as objects having magnitude and direction. The general definition is then given as a set of axioms that subsumes the original one. In a first course in linear algebra, theorems establish that a vector space has a dimension, and that any vector may be expressed as a linear combination of linearly independent vectors. Matrices are introduced to transform the representation of a vector, and the matrix product is defined consistently with functional composition, so that successive changes of representation may be applied in sequence; this includes multiplying components from a field by an ordered n-tuple of linearly independent vectors to produce a linear combination. A vector space over a field is the set of all linear combinations of components from the field with a full set of linearly independent vectors generating that space. It is easy to see that the complex plane, with the set of linearly independent vectors (1, i), is a vector space over the field of real numbers. Usually a set of linearly independent vectors (call them the base vectors, for brevity) is considered abstractly; but let us consider some concrete examples. 1 and i satisfy the following multiplication (Cayley) table:
* | 1 | i |
1 | 1 | i |
i | i | -1 |
Similarly, the base vectors I and J, shown below as matrices, under ordinary matrix multiplication satisfy the following Cayley table:
* | I | J |
I | I | J |
J | J | -I |
Clearly, there is a one-to-one correspondence between the elements of these two spaces and between the mappings produced by the binary operations defined in the spaces. These are simple examples of algebras, and we have just seen what is termed a homomorphism (when a field, F, is specified, the algebra is often termed an F-algebra).
Now that we are thinking of base vectors as matrices, consider:
I = ⌈ 1  0 ⌉ ,  J = ⌈ 0  -1 ⌉
    ⌊ 0  1 ⌋        ⌊ 1   0 ⌋
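As a quick check of the correspondence, here is a minimal sketch (assuming numpy) verifying that I and J reproduce the Cayley table of 1 and i, and that a⊛I + b⊛J behaves as a + bi:

```python
# A minimal sketch (assuming numpy): the matrices I and J above satisfy
# the same Cayley table as 1 and i, so a*I + b*J mimics a + b*i.
import numpy as np

I = np.eye(2)
J = np.array([[0.0, -1.0], [1.0, 0.0]])

assert np.allclose(J @ J, -I)   # i * i = -1
assert np.allclose(I @ J, J)    # 1 * i = i
assert np.allclose(J @ I, J)    # i * 1 = i

def to_matrix(z):
    """Represent the complex number z as z.real*I + z.imag*J."""
    return z.real * I + z.imag * J

z, w = 2 + 3j, -1 + 0.5j
# the representation is multiplicative: z*w maps to the matrix product
assert np.allclose(to_matrix(z * w), to_matrix(z) @ to_matrix(w))
```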
Olacapato, the highest locality in Argentina (4,090 m above sea level), lies on the General Manuel Belgrano Railway, on the branch served by the Train of the Clouds, beside National Route 51, 2.5 km north of Olacapato Chico and 45 km west of San Antonio de los Cobres.
The 1991 census recorded a population of 186 inhabitants (INDEC, 2001), classified as a dispersed rural population.
Postal code: A4413
The Cassano Particular Solution Formula
Concerning second-order linear ordinary differential equations, it has long been well known that a single known solution of the homogeneous equation can be used to construct further solutions by reduction of order. So, if u is a solution of the homogeneous equation y″ + P(x)y′ + Q(x)y = 0, then there exists v such that y = uv solves the inhomogeneous equation y″ + P(x)y′ + Q(x)y = R(x), and a particular solution is given by the Cassano particular solution formula.
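For reference, the classical reduction-of-order construction matches the statement above in shape (one known homogeneous solution u yields a particular solution); identifying it with the named formula is an assumption:

```latex
% Classical reduction-of-order particular solution (an assumed stand-in;
% whether it coincides with the named formula is not confirmed here).
% Given u with  u'' + P(x) u' + Q(x) u = 0,  set y = u v; then
\[
  y_p(x) \;=\; u(x) \int \frac{e^{-\int P\,dx}}{u(x)^{2}}
  \left( \int u(x)\, e^{\int P\,dx}\, R(x)\, dx \right) dx
\]
% satisfies  y_p'' + P(x) y_p' + Q(x) y_p = R(x).
```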
Polyanin, Andrei D.; Zaitsev, Valentin F. (2003). Handbook of Exact Solutions for Ordinary Differential Equations, 2nd ed. Chapman & Hall/CRC. ISBN 1-58488-297-2.
"PARTICULAR SOLUTIONS TO INHOMOGENEOUS SECOND ORDER LINEAR ORDINARY DIFFERENTIAL EQUATIONS". Retrieved 28 Nov 2005.
Cassano, Claude M. (2004). Analysis On Vector Product Spaces. COORDS.
The inhomogeneous four-vector Klein-Gordon equation is simply the Klein-Gordon equation in each of the four-vector components, in the presence of current densities. When these components are complex (or doublets) it essentially has eight degrees of freedom (dimensions), although, since the trivial solution is always a solution, extra dimensions are not necessarily imposed. Analogously to Maxwell's equations (and the inhomogeneous four-vector wave equation), the four-vector inhomogeneous Klein-Gordon equation may be written as a matrix product, as follows:
The four-vector wave equation may be written as a matrix product. This, when operating on a four-vector, gives a matrix-product definition of the d'Alembertian operator. The form may, of course, vary by permutations of rows and columns, as well as by transformations, but one form is:
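As a hedged point of comparison (not necessarily the author's form), the best-known matrix-product expression whose square is the d'Alembertian uses the Dirac matrices:

```latex
% One well-known matrix factorization of the d'Alembertian (Dirac's),
% given here only as an analogue; the author's specific form is not shown.
\[
  (\gamma^{\mu}\partial_{\mu})(\gamma^{\nu}\partial_{\nu})
  \;=\; \tfrac{1}{2}\{\gamma^{\mu},\gamma^{\nu}\}\,\partial_{\mu}\partial_{\nu}
  \;=\; \eta^{\mu\nu}\partial_{\mu}\partial_{\nu}\,I_{4}
  \;=\; \Box\, I_{4}.
\]
```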
identity subspaces
For an R-algebra ((R,⊛,S),∘), S has an identity subspace whenever there exists:
a Left Handed Identity Element I_L∈S: I_L∘v=v, ∀v∈S;
a Right Handed Identity Element I_R∈S: v∘I_R=v, ∀v∈S; or
a Single Identity Element I_B∈S: I_B∘v=v∘I_B=v, ∀v∈S;
respectively.
Whenever this identity subspace (the span of the identity element) exists, it is denoted: S_t.
Note: I_X is used to represent an Identity Element of any type.
For an R-algebra ((R,⊛,S),∘) whose S has an identity subspace S_t, and u∈S,
u = u_t⊛I_X + u_s ,
where: u_t∈R;
a conjugate ( ∗ ) of u may be defined by:
u^{∗} = u_t⊛I_X - u_s .
Clearly, I_X and the remaining basis vectors are linearly independent, and u_s may be expressed as a linear combination of linearly independent vectors not including I_X.
So:
u_t⊛I_X = (1/2)(u + u^{∗}) ,
u_s = (1/2)(u - u^{∗}) .
In the complex-plane algebra, with I_X = 1 and u = a⊛1 + b⊛i, this is ordinary complex conjugation: u^{∗} = a⊛1 - b⊛i.
For an R-algebra ((R,⊛,S),∘), if S has an identity subspace S_t, given by the span of I_X, then the set of all vectors within S expressible as a linear combination of basis vectors not including I_X is denoted: S_s.
For an R-algebra ((R,⊛,S),∘), its identity subspace type is defined as follows:
LEFT : whenever there is a Left Handed Identity Element I_L only,
RITE : whenever there is a Right Handed Identity Element I_R only,
BOTH : whenever there is a Left Handed Identity Element and a Right Handed Identity Element (hence a Single Identity Element I_B),
NONE : whenever there is NO Identity Element.
Definition 2-17: For an algebra ((R,⊛,S),∘), the ⊙-product is defined such that:
∀u,v∈S,u⊙v≡(1/2)[(u+[u^{∗}-u]δ_{I_{X}∘u}^{u∘I_{X}})∘v
+(v+[v^{∗}-v]δ_{I_{X}∘v}^{v∘I_{X}})∘u],
where δ_{I_{X}∘u}^{u∘I_{X}} is the Kronecker delta (equal to 1 whenever I_{X}∘u=u∘I_{X}, and 0 otherwise), and δ_{I_{X}∘v}^{v∘I_{X}}=δ_{I_{X}∘u}^{u∘I_{X}}=0 whenever no identity subspace exists.
Definition 2-18: For an algebra ((R,⊛,S),∘), the ×-product is defined such that:
∀u,v∈S,u×v≡(1/2)[(u+[u^{∗}-u]δ_{I_{X}∘u}^{u∘I_{X}})∘v
-(v+[v^{∗}-v]δ_{I_{X}∘v}^{v∘I_{X}})∘u],
where: δ_{I_{X}∘v}^{v∘I_{X}}=δ_{I_{X}∘u}^{u∘I_{X}}=0, whenever no identity subspace exists.
Theorem 2-9: For an algebra ((R,⊛,S),∘) , with ⊙-product and ×-product:
∀u,v∈S,u⊙v=v⊙u
∀u,v∈S,u×v=-v×u
Theorem 2-10: For an algebra ((R,⊛,S),∘) , with ⊙-product and ×-product:
∀u,v∈S,u∘v=u⊙v+u×v-([u^{∗}-u]δ_{I_{X}∘u}^{u∘I_{X}})∘v
Theorem 2-11: For an R-algebra ((R,⊛,S),∘) , with ⊙-product and ×-product:
if the identity subspace type is BOTH, then:
∀u,v∈S,u∘v=u^{∗}⊙v+u^{∗}×v
∀u,v∈S,u⊙v=(1/2)(u^{∗}∘v+v^{∗}∘u)
∀u,v∈S,u×v=(1/2)(u^{∗}∘v-v^{∗}∘u) ,
if the identity subspace type is NOT BOTH, then:
∀u,v∈S,u∘v=u⊙v+u×v
∀u,v∈S,u⊙v=(1/2)(u∘v+v∘u)
∀u,v∈S,u×v=(1/2)(u∘v-v∘u)
Theorem 2-12: For an R-algebra ((R,⊛,S),∘) , with ×-product,
if its identity subspace type is BOTH, then:
∀u,v∈S,u×v=u_{t}⊛v_{s}-v_{t}⊛u_{s}+u_{s}×v_{s} .
Theorem 2-13: For an R-algebra ((R,⊛,S),∘) , with ⊙-product,
if its identity subspace type is BOTH, then:
∀u,v∈S,u⊙v=(u_{t}v_{t})⊛I_{B}+u_{s}⊙v_{s} .
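As a concrete check, here is a sketch (assuming numpy) in the quaternion R-algebra, whose identity subspace type is BOTH and whose identity I_B = 1 commutes with everything, so the Kronecker deltas in Definitions 2-17 and 2-18 are all 1; it verifies Theorems 2-9, 2-12, and 2-13 numerically:

```python
# A numerical check (assuming numpy) of Definitions 2-17/2-18 and
# Theorems 2-9, 2-12, 2-13 in the quaternion algebra (type BOTH, delta = 1).
import numpy as np

def qmul(u, v):
    """Hamilton product; quaternions as arrays (t, x, y, z)."""
    a, s = u[0], u[1:]
    b, t = v[0], v[1:]
    return np.concatenate(([a * b - s @ t], a * t + b * s + np.cross(s, t)))

def conj(u):
    return np.concatenate(([u[0]], -u[1:]))

def dot_prod(u, v):    # the (.)-product, Definition 2-17 with delta = 1
    return 0.5 * (qmul(conj(u), v) + qmul(conj(v), u))

def cross_prod(u, v):  # the (x)-product, Definition 2-18 with delta = 1
    return 0.5 * (qmul(conj(u), v) - qmul(conj(v), u))

rng = np.random.default_rng(0)
u, v = rng.normal(size=4), rng.normal(size=4)
us, vs = np.concatenate(([0.0], u[1:])), np.concatenate(([0.0], v[1:]))

assert np.allclose(dot_prod(u, v), dot_prod(v, u))        # Theorem 2-9
assert np.allclose(cross_prod(u, v), -cross_prod(v, u))   # Theorem 2-9
# Theorem 2-12: u x v = u_t v_s - v_t u_s + u_s x v_s
assert np.allclose(cross_prod(u, v),
                   u[0] * vs - v[0] * us + cross_prod(us, vs))
# Theorem 2-13: u . v = (u_t v_t) I_B + u_s . v_s  (a pure S_t element)
I_B = np.array([1.0, 0, 0, 0])
assert np.allclose(dot_prod(u, v), u[0] * v[0] * I_B + dot_prod(us, vs))
```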
Definition 2-19: For an R-algebra ((R,⊛,S),∘) , with ⊙-product and ×-product:
if ∀u,v∈S_{s}:
u×v∈S_{s} , and
u⊙v∈S_{t} ,
then the R-algebra has a space-time structure.
Theorem 2-14: For an R-algebra ((R,⊛,S),∘) , with ⊙-product and ×-product:
if its identity subspace type is BOTH, and
the R-algebra has a space-time structure, then:
∀u,v∈S,u⊙v∈S_{t} , and
∀u,v∈S,u×v∈S_{s} .
Definition 2-24: A Lorentz-Minkowski Vectric Space is a vectric space,
(((R,⊛,S),∘),d) , with space-time structure, such that:
∀u,v∈S , u×v∈S_{s} ,
u⊙v∈S_{t} , and
u_{t}⊙v_{s}+u_{s}⊙v_{t}=0
Definition 3-9: An associative F-algebra ((F,⊛,S),∘)^{A} is an F-algebra ((F,⊛,S),∘)
such that:
∀x,y,z∈S, (x∘y)∘z=x∘(y∘z) .
Theorem 3-13a: In an n-dimensional associative R-algebra ((R,⊛,S)_{n},∘)^{A} , with
product coeffs β_{ij}^{m} , contravariant basis: (u_{i})_{n}∋
u_{j}=u_{j}(x^{i})_{n} , (∀j∈Z⁺∋1≤j≤n) , and vector function
f:Rⁿ⇉Rⁿ ,f=f(f^{j}(x^{i})_{n})_{n}=f(x) ;
if: ∃f₁∈Rⁿ∋f₁=δf∘(δr)⁻¹ , is continuous for arbitrary δr ,
then: ∑_{m=1}ⁿ(β_{hk}^{m}f_{;m}^{j}-β_{mk}^{j}f_{;h}^{m})=0
Proof:
f₁∘δr=(δf∘(δr)⁻¹)∘δr=δf∘((δr)⁻¹∘δr)=δf .
So:
f₁∘∑_{h=1}ⁿ(u_{h}δx^{h})=∑_{j=1}ⁿ(∑_{h=1}ⁿ((((∂f^{j})/(∂x^{h}))+∑_{k=1}ⁿ(∑_{m=1}ⁿ(f^{k}L_{m}^{j}((∂/(∂x^{h}))Γ_{k}^{m}|_{x=ξ}))))δx^{h}⊛u_{j})) ,
for arbitrary δx^{h} , so, ∀h∈Z⁺∋1≤h≤n:
f₁∘u_{h}=∑_{j=1}ⁿ((((∂f^{j})/(∂x^{h}))+∑_{k=1}ⁿ(∑_{m=1}ⁿ(f^{k}L_{m}^{j}((∂/(∂x^{h}))Γ_{k}^{m}|_{x=ξ}))))⊛u_{j}) ,
and, so, in the limit (A5-1 & 2), as x→ξ :
f₁∘u_{h}=∑_{j=1}ⁿ((((∂f^{j})/(∂x^{h}))+∑_{k=1}ⁿ(f^{k}{j/(hk)}))⊛u_{j}) ,
( using Definition 3-7a ),
or:
f₁∘u_{h}=∑_{j=1}ⁿ(f_{;h}^{j}⊛u_{j}) .
Now, ∀h,k∈Z⁺∋1≤h,k≤n :
(f₁∘u_{h})∘u_{k}=f₁∘(u_{h}∘u_{k}) .
∴
(∑_{j=1}ⁿf_{;h}^{j}⊛u_{j})∘u_{k}=f₁∘(∑_{m=1}ⁿ(β_{hk}^{m}⊛u_{m}))
=∑_{m=1}ⁿ(β_{hk}^{m}⊛(f₁∘u_{m}))
=∑_{m=1}ⁿ(β_{hk}^{m}⊛(∑_{j=1}ⁿf_{;m}^{j}⊛u_{j}))
=∑_{j=1}ⁿ(∑_{m=1}ⁿ(β_{hk}^{m}f_{;m}^{j}⊛u_{j})) (∗1)
(∑_{j=1}ⁿf_{;h}^{j}⊛u_{j})∘u_{k}=∑_{j=1}ⁿ(f_{;h}^{j}⊛(u_{j}∘u_{k}))
=∑_{j=1}ⁿ(f_{;h}^{j}⊛(∑_{m=1}ⁿβ_{jk}^{m}⊛u_{m}))
=∑_{j=1}ⁿ(∑_{m=1}ⁿ(f_{;h}^{j}β_{jk}^{m}⊛u_{m}))
=∑_{m=1}ⁿ(∑_{j=1}ⁿ(f_{;h}^{m}β_{mk}^{j}⊛u_{j}))
=∑_{j=1}ⁿ(∑_{m=1}ⁿ(f_{;h}^{m}β_{mk}^{j}⊛u_{j})) (∗2)
∴
∑_{j=1}ⁿ(∑_{m=1}ⁿ((β_{hk}^{m}f_{;m}^{j}-β_{mk}^{j}f_{;h}^{m})⊛u_{j}))=0
and, so:
∑_{m=1}ⁿ(β_{hk}^{m}f_{;m}^{j}-β_{mk}^{j}f_{;h}^{m})=0 □.
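For orientation, in the 2-dimensional associative R-algebra of the complex numbers, with flat coordinates (so that f_{;h}^{j}=∂f^{j}/∂x^{h}, an assumption of this sketch), the condition of Theorem 3-13a reduces to the Cauchy-Riemann equations; a sketch (assuming sympy) checks it for f(z)=z²:

```python
# Sketch (assuming sympy): in the complex-number algebra, with flat
# coordinates (f_{;h}^j = df^j/dx^h), the condition of Theorem 3-13a,
#   sum_m (beta_{hk}^m f_{;m}^j - beta_{mk}^j f_{;h}^m) = 0,
# amounts to the Cauchy-Riemann equations. Check it for f(z) = z^2.
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
# f(z) = z^2 = (x1 + i*x2)^2: components f^1, f^2
f = [x1**2 - x2**2, 2 * x1 * x2]
xs = [x1, x2]

# structure constants of C w.r.t. basis (1, i): e_h e_k = sum_m beta[h][k][m] e_m
beta = [[[1, 0], [0, 1]],   # 1*1 = 1,  1*i = i
        [[0, 1], [-1, 0]]]  # i*1 = i,  i*i = -1

def df(j, h):               # f_{;h}^j in flat coordinates
    return sp.diff(f[j], xs[h])

for h in range(2):
    for k in range(2):
        for j in range(2):
            expr = sum(beta[h][k][m] * df(j, m) - beta[m][k][j] * df(m, h)
                       for m in range(2))
            assert sp.simplify(expr) == 0
print("Theorem 3-13a condition holds for f(z) = z^2")
```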
Theorem 3-13b: In an n-dimensional associative R-algebra ((R,⊛,S)_{n},∘)^{A} , with
product coeffs β_{ij}^{m} , contravariant basis: (u_{i})_{n}∋
u_{j}=u_{j}(x^{i})_{n} , (∀j∈Z⁺∋1≤j≤n) , and vector function
f:Rⁿ⇉Rⁿ ,f=f(f^{j}(x^{i})_{n})_{n}=f(x) ;
if: ∃f₂∈Rⁿ∋f₂=lim_{dr→0}(dr)⁻¹∘df , is continuous,
then: ∑_{m=1}ⁿ(β_{hk}^{m}f_{;m}^{j}-β_{hm}^{j}f_{;k}^{m})=0
Proof:
If ∃f₂∈Rⁿ∋f₂=lim_{dr→0}(dr)⁻¹∘df
then
lim_{dr→0}(dr∘f₂)=lim_{dr→0}(dr∘((dr)⁻¹∘df))=lim_{dr→0}((dr∘(dr)⁻¹)∘df)=lim_{dr→0}df
∴
lim_{dr→0}∑_{h=1}ⁿ(u_{h}dx^{h})∘f₂=
=lim_{dr→0}∑_{j=1}ⁿ(∑_{h=1}ⁿ((((∂f^{j})/(∂x^{h}))+∑_{k=1}ⁿ(∑_{m=1}ⁿ(f^{k}L_{m}^{j}((∂/(∂x^{h}))Γ_{k}^{m}|_{x=ξ}))))dx^{h}⊛u_{j}))
=lim_{dr→0}∑_{j=1}ⁿ(∑_{h=1}ⁿ((((∂f^{j})/(∂x^{h}))+∑_{k=1}ⁿ(f^{k}{j/(hk)}))dx^{h}⊛u_{j}))
for arbitrary dx^{h}, which are linearly independent; so, in the limit (A5-1 & 2), as x→ξ, ∀h∈Z⁺∋1≤h≤n:
u_{h}∘f₂=∑_{j=1}ⁿ(f_{;h}^{j}⊛u_{j}) .
Now, ∀h,k∈Z⁺∋1≤h,k≤n :
u_{k}∘(u_{h}∘f₂)=(u_{k}∘u_{h})∘f₂ .
∴
u_{k}∘(∑_{j=1}ⁿf_{;h}^{j}⊛u_{j})=(∑_{m=1}ⁿ(β_{kh}^{m}⊛u_{m}))∘f₂
=∑_{m=1}ⁿ(β_{kh}^{m}⊛(u_{m}∘f₂))
=∑_{m=1}ⁿ(β_{kh}^{m}(∑_{j=1}ⁿf_{;m}^{j}⊛u_{j}))
=∑_{j=1}ⁿ(∑_{m=1}ⁿ(β_{kh}^{m}f_{;m}^{j}⊛u_{j})) (∗1)
u_{k}∘(∑_{j=1}ⁿf_{;h}^{j}⊛u_{j})=∑_{j=1}ⁿ(f_{;h}^{j}⊛(u_{k}∘u_{j}))
=∑_{j=1}ⁿ(f_{;h}^{j}⊛(∑_{m=1}ⁿβ_{kj}^{m}⊛u_{m}))
=∑_{j=1}ⁿ(∑_{m=1}ⁿ(f_{;h}^{j}β_{kj}^{m}⊛u_{m}))
=∑_{m=1}ⁿ(∑_{j=1}ⁿ(f_{;h}^{m}β_{km}^{j}⊛u_{j}))
=∑_{j=1}ⁿ(∑_{m=1}ⁿ(f_{;h}^{m}β_{km}^{j}⊛u_{j})) . (∗2)
Therefore,
∑_{j=1}ⁿ(∑_{m=1}ⁿ((β_{kh}^{m}f_{;m}^{j}-β_{km}^{j}f_{;h}^{m})⊛u_{j}))=0 ,
and, so (relabeling the free indices h↔k):
∑_{m=1}ⁿ(β_{hk}^{m}f_{;m}^{j}-β_{hm}^{j}f_{;k}^{m})=0 □.
In this article, linear analysis is the study of a linear function of a linear variable, analogous to the study of a complex function of a complex variable.
A holomorphic space is a vector space (F,⊛,S) (where the elements of the vector space belong to the set S, the elements of the field of the vector space belong to the field F, and ⊛ is the binary operation between the elements of F and the elements of S) within which a product, ∘, is defined on S, satisfying:
(i) anti-symmetry,
(ii) bilinearity,
(iii) homogeneity.
The Weighted Matrix Product
Cassano, Claude M. (2004). Analysis On Vector Product Spaces. COORDS.
The Weighted Matrix Product (Weighted Matrix Multiplication) is a generalization of ordinary matrix multiplication, in the following way.
Given a set of Weight Matrices W^{(1)},…,W^{(c(A))}, the Weighted Matrix Product of the matrix pair (A,B) is given by:
(A∘B)_{ij} = ∑_{k=1}^{c(A)} a_{ik} w_{ij}^{(k)} b_{kj} ,
where: c(A) is the number of columns of A.
The number of Weight Matrices is the number of columns of the left operand = the number of rows of the right operand.
The number of rows of the Weight Matrices is the number of rows of the left operand.
The number of columns of the Weight Matrices is the number of columns of the right operand.
The Weighted Matrix Product is defined only if the matrix operands are conformable in the ordinary sense.
The resultant matrix has the number of rows of the left operand and the number of columns of the right operand.
NOTE:
Ordinary Matrix Multiplication is the special case of Weighted Matrix Multiplication, where all the weight matrix entries are 1s .
Ordinary Matrix Multiplication is Weighted Matrix Multiplication in a default "sea of 1s ", the weight matrices formed out of the "sea" as necessary.
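A minimal sketch (assuming numpy) of the definition above, together with a check of this NOTE, that all-ones weights recover the ordinary product:

```python
# A sketch (assuming numpy) of the Weighted Matrix Product as defined
# above: (A o B)_ij = sum_k A[i,k] * W[k][i,j] * B[k,j], with one m-by-n
# weight matrix W[k] per column k of the left operand.
import numpy as np

def weighted_matmul(A, B, W):
    """Weighted matrix product of m-by-p A and p-by-n B.

    W is a sequence of p weight matrices, each m-by-n.
    """
    m, p = A.shape
    p2, n = B.shape
    assert p == p2 and len(W) == p and all(Wk.shape == (m, n) for Wk in W)
    C = np.zeros((m, n))
    for k in range(p):
        # rank-one ordinary-product term, modulated entrywise by W[k]
        C += np.outer(A[:, k], B[k, :]) * W[k]
    return C

# With all weights equal to 1, the ordinary product is recovered:
rng = np.random.default_rng(0)
A, B = rng.normal(size=(3, 4)), rng.normal(size=(4, 2))
ones = [np.ones((3, 2))] * 4
assert np.allclose(weighted_matmul(A, B, ones), A @ B)
```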
NOTE:
The Weighted Matrix Product is not generally associative.
Weighted matrix multiplication may be expressed in terms of ordinary matrix multiplication, using matrices constructed from the constituent parts. For an m×p matrix A and a p×n matrix B, one such construction uses diagonal matrices built from the columns of A and the rows of B:
A∘B = ∑_{k=1}^{p} diag(a_{1k},…,a_{mk}) W^{(k)} diag(b_{k1},…,b_{kn}) .
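A sketch (assuming numpy) checking this ordinary-multiplication expression against the entrywise definition:

```python
# A sketch (assuming numpy) checking the diagonal-matrix construction
# above against the entrywise definition of the weighted product.
import numpy as np

rng = np.random.default_rng(1)
m, p, n = 3, 4, 2
A, B = rng.normal(size=(m, p)), rng.normal(size=(p, n))
W = [rng.normal(size=(m, n)) for _ in range(p)]

# entrywise definition: (A o B)_ij = sum_k A[i,k] W[k][i,j] B[k,j]
direct = sum(np.outer(A[:, k], B[k, :]) * W[k] for k in range(p))
# ordinary-product construction: sum_k diag(col_k(A)) W[k] diag(row_k(B))
via_diag = sum(np.diag(A[:, k]) @ W[k] @ np.diag(B[k, :]) for k in range(p))
assert np.allclose(direct, via_diag)
```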
The Weighted Matrix product is especially useful in developing matrix bases closed under a (not necessarily associative) product (algebras).
As an example, consider the following developments. It is convenient (although not necessary) to begin with permutation matrices as the basis, since they form a known basis and are about as simple as can be.
the complex plane
Take as basis the 2×2 permutation matrices
E₁ = ⌈ 1  0 ⌉ ,  E₂ = ⌈ 0  1 ⌉ ,
     ⌊ 0  1 ⌋        ⌊ 1  0 ⌋
and let ∘ be weighted matrix multiplication, with weights:
W^{(1)} = ⌈ 1   1 ⌉ ,  W^{(2)} = ⌈ -1  1 ⌉ ;
          ⌊ 1  -1 ⌋              ⌊  1  1 ⌋
then:
E₁∘E₁=E₁ ,  E₁∘E₂=E₂ ,  E₂∘E₁=E₂ ,  E₂∘E₂=-E₁ .
So E₁ and E₂ satisfy the same Cayley table as 1 and i. Thus a homomorphism between this algebra and the complex plane is manifested.
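A sketch (assuming numpy) verifying the table just derived; the weight matrices are the ones displayed above, which are forced by requiring the permutation basis to reproduce the Cayley table of (1, i):

```python
# A sketch (assuming numpy) verifying the weighted-matrix-product
# realization of the complex plane from the permutation basis {E1, E2}.
import numpy as np

E1 = np.eye(2)
E2 = np.array([[0.0, 1.0], [1.0, 0.0]])
W = [np.array([[1.0, 1.0], [1.0, -1.0]]),
     np.array([[-1.0, 1.0], [1.0, 1.0]])]

def wprod(A, B):
    """Weighted matrix product with the weights W above."""
    return sum(np.outer(A[:, k], B[k, :]) * W[k] for k in range(2))

assert np.allclose(wprod(E1, E1), E1)   # 1 * 1 = 1
assert np.allclose(wprod(E1, E2), E2)   # 1 * i = i
assert np.allclose(wprod(E2, E1), E2)   # i * 1 = i
assert np.allclose(wprod(E2, E2), -E1)  # i * i = -1
```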
Quaternions
Similarly, take four 4×4 permutation matrices as the basis, with the product given by weighted matrix multiplication for suitably chosen weight matrices; then the basis elements satisfy:
× | 1 | i | j | k |
1 | 1 | i | j | k |
i | i | -1 | k | -j |
j | j | -k | -1 | i |
k | k | j | -i | -1 |
Thus a homomorphism between this algebra and the space of quaternions is manifested.
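The Cayley table above can be checked with a sketch (assuming numpy) via the standard real 4×4 left-multiplication representation of the quaternions; these representation matrices are signed permutation matrices, consistent with building the basis from permutation matrices and letting the weight matrices supply the signs:

```python
# A sketch (assuming numpy) checking the quaternion Cayley table above via
# the standard real 4x4 left-multiplication representation. These are
# signed permutation matrices: permutation matrices with signs that, in the
# weighted-product construction, would come from the weight matrices.
import numpy as np

one = np.eye(4)
i = np.array([[0, -1, 0, 0], [1, 0, 0, 0], [0, 0, 0, -1], [0, 0, 1, 0]], float)
j = np.array([[0, 0, -1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, -1, 0, 0]], float)
k = np.array([[0, 0, 0, -1], [0, 0, -1, 0], [0, 1, 0, 0], [1, 0, 0, 0]], float)

# each entry of the Cayley table: row element times column element
assert np.allclose(i @ i, -one) and np.allclose(j @ j, -one) and np.allclose(k @ k, -one)
assert np.allclose(i @ j, k) and np.allclose(j @ i, -k)
assert np.allclose(j @ k, i) and np.allclose(k @ j, -i)
assert np.allclose(k @ i, j) and np.allclose(i @ k, -j)
```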