Wikipedia:Reference desk/Archives/Mathematics/2019 April 26
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is a transcluded archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
April 26
Wedge product
I am having difficulty with exterior products and differential forms. I have an expression like ω = dx1∧dx2∧dx3 and want to express this in terms of dy1, ..., dyk. I have a linear relationship dxi = Σj Mij dyj, where M is a matrix. So in this case I have ω = (Σj M1j dyj) ∧ (Σj M2j dyj) ∧ (Σj M3j dyj). But I'm having difficulty expressing this in a nice terse way. Does Spivak deal with this? If so, I don't see it. Can anyone point to a better way of thinking about this? Thanks, Robinh (talk) 05:35, 26 April 2019 (UTC)
- First, ω is a 3-form and dy1∧...∧dyk is a k-form, so unless k=3 they don't live in the same space. I'm assuming you want ω as an expression in dyp∧dyq∧dyr, 1≤p<q<r≤k, which would have Choose(k, 3) terms, and each coefficient would be a 3×3 determinant in the Mij's (a sketch of the computation follows below). Not very terse, so maybe you're better off leaving it as dx1∧dx2∧dx3. --RDBury (talk) 18:17, 26 April 2019 (UTC)
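A minimal sympy sketch of that expansion (an editorial illustration, not from the thread; the size k = 4 and the symbol names are arbitrary choices). Assuming the substitution dxi = Σj Mij dyj, multilinearity and antisymmetry make the coefficient of dyp∧dyq∧dyr the 3×3 minor of M taken on columns p, q, r:

```python
# Expand omega = dx1 ^ dx2 ^ dx3 in the dy basis, given dx_i = sum_j M[i,j] dy_j.
# Each coefficient of dy_p ^ dy_q ^ dy_r is the 3x3 minor of M on columns p, q, r.
from itertools import combinations
import sympy as sp

k = 4                                    # illustrative number of y-coordinates
M = sp.Matrix(3, k, lambda i, j: sp.Symbol(f"M{i+1}{j+1}"))

for p, q, r in combinations(range(k), 3):
    minor = M[:, [p, q, r]].det()        # 3x3 determinant in the Mij's
    print(f"dy{p+1}^dy{q+1}^dy{r+1}:", sp.expand(minor))
```

For k = 4 this prints Choose(4, 3) = 4 coefficients, each a six-term determinant, which illustrates why the expanded form is not terse.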
- (OP) Thanks for this. The problem is, there are a lot of coefficients, each one of which is a pretty complicated expression. And things get worse if I start from something more complicated than a single wedge product. All of which seems like an awful lot of work for what is conceptually very simple (in my case an orthogonal transformation of my coordinate system), which made me think I've missed something obvious. Maybe not though! Robinh (talk) 19:34, 26 April 2019 (UTC)
- Given a linear transformation F from V to W there is a corresponding transformation ∧k(F) from ∧k(V) to ∧k(W). (See Exterior algebra#Functoriality.) As you say, this isn't hard conceptually, but I think it would be hard to write it out in terms of coordinates. The matrix for ∧k(F) would be Choose(m, k) by Choose(n, k), and I'm guessing each entry would be a k×k determinant (see the sketch below). If F were a transformation of V and you picked a basis so that the matrix of F was diagonal, then I imagine that things would simplify considerably. I'm not up on spectral theory though, so I'm not sure what the status is of orthogonal matrices with respect to being diagonalizable; Spectral theorem may help with that. --RDBury (talk) 23:56, 26 April 2019 (UTC)
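A short coordinate sketch of that matrix (the function name kth_compound is my own): in the induced bases, the matrix of ∧k(F) is the k-th compound matrix of F, which is indeed Choose(m, k) × Choose(n, k) with each entry a k×k minor:

```python
# k-th compound matrix: the coordinate matrix of wedge^k(F) in the induced bases.
from itertools import combinations
import sympy as sp

def kth_compound(F: sp.Matrix, k: int) -> sp.Matrix:
    rows = list(combinations(range(F.rows), k))   # index sets for basis k-vectors
    cols = list(combinations(range(F.cols), k))
    return sp.Matrix(len(rows), len(cols),
                     lambda a, b: F[list(rows[a]), list(cols[b])].det())

F = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f"f{i+1}{j+1}"))
print(kth_compound(F, 3))   # 1x1 matrix [det F]: wedge^3 of a 3-space is a line
```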
- Every orthogonal matrix is diagonalizable with unit-modulus eigenvalues. If M is diagonal, the sums above become considerably simpler.--Jasper Deng (talk) 08:07, 29 April 2019 (UTC)
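A quick illustrative check of that spectral fact on the 2D rotation matrix (diagonalizable over ℂ, though not over ℝ):

```python
# A rotation is orthogonal; its eigenvalues cos(t) ± sqrt(cos(t)**2 - 1)
# equal e^{±it} for real t, so each has modulus 1.
import sympy as sp

t = sp.symbols("t", real=True)
R = sp.Matrix([[sp.cos(t), -sp.sin(t)],
               [sp.sin(t),  sp.cos(t)]])
print(sp.simplify(R.T * R))   # identity matrix: R is orthogonal
print(R.eigenvals())          # the two complex eigenvalues, each of modulus 1
```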
- Thanks for this Jasper. How does diagonalizability help here? If my matrix is M = P⁻¹DP, where 'D' is diagonal, how does this help me? Is 'P' of some form such that the differential substitution becomes simpler? Robinh (talk) 03:37, 30 April 2019 (UTC)
- I meant if M is itself a diagonal matrix with respect to these bases of the dual spaces (i.e. spaces of 1-forms), in which case dx1∧dx2∧dx3 = M11 M22 M33 dy1∧dy2∧dy3.
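A one-line sanity check of the diagonal case (the symbols a, b, c are illustrative): with M = diag(a, b, c) the single surviving coefficient is the product of the diagonal entries, i.e. det M:

```python
# Diagonal substitution dx_i = M_ii dy_i has no cross terms, so
# dx1 ^ dx2 ^ dx3 = (a*b*c) dy1 ^ dy2 ^ dy3.
import sympy as sp

a, b, c = sp.symbols("a b c")
M = sp.diag(a, b, c)
print(M.det())   # a*b*c
```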
- It may also in general be useful to think of the exterior product as a quotient (by the two-sided ideal generated by x ⊗ x for all x in the original tensor product space) of the tensor product of several copies of the dual space. The extension of the linear mapping induced by M to the tensor product is much more straightforward, and you can pass to the quotient.--Jasper Deng (talk) 05:59, 30 April 2019 (UTC)
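For reference, a LaTeX sketch of this quotient description (the ideal notation I is mine): the map M⊗k induced on tensors sends the ideal into itself, so it descends to the exterior power.

```latex
% Exterior power as a quotient of the tensor power; the induced map descends.
\[
  \textstyle\bigwedge^{k} V^{*} \;\cong\; (V^{*})^{\otimes k} \big/ I,
  \qquad
  I = \bigl\langle\, \cdots \otimes x \otimes x \otimes \cdots \,\bigr\rangle,
\]
\[
  M^{\otimes k}(\alpha_{1} \otimes \cdots \otimes \alpha_{k})
    = M\alpha_{1} \otimes \cdots \otimes M\alpha_{k}
  \;\longmapsto\;
  M\alpha_{1} \wedge \cdots \wedge M\alpha_{k}.
\]
```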