python - Scipy Null Space Inaccurate only for Complex Values - Stack Overflow


I've got a program that computes the null space of a matrix using scipy's null_space. My code works absolutely perfectly when the matrix is real, but it seems to contradict the results from MATLAB for complex matrices.

For instance,

[[ 1.        +0.j   0.        +0.j   0.        +0.j  -0.28867513+0.5j]
 [ 0.        +0.j   1.        +0.j   0.        +0.j  -0.28867513-0.5j]
 [ 0.        +0.j   0.        +0.j   1.        +0.j   0.57735027-0.j ]]

when plugged into scipy.linalg.null_space gives,

[[ 0.24235958-0.32852474j]
 [ 0.16333098+0.37415192j]
 [-0.40569056-0.04562718j]
 [ 0.70267665+0.0790286j ]]

The exact same matrix in MATLAB gives,

0.2235 - 0.3134i
0.1596 + 0.3502i
-0.4151 + 0.2949i
0.6636 + 0.0639i

These are clearly not the same up to scaling, so what's happening? Is scipy just not very accurate for complex matrices, or am I doing something wrong? Again, my code works absolutely perfectly when the matrices happen to be real. Thanks in advance!

asked Jan 16 at 12:19 by marcus1518
  • Could you edit your question to include a minimal reproducible example in each language, such that we can copy/paste your code and reproduce the result you're seeing? – Wolfie, Jan 17 at 8:24

2 Answers


Both matlab and scipy are well-tested pieces of software and I doubt that either is giving wrong or inaccurate results. The problem is almost certainly with your input. You really need to show the code that produced those results. I don't think what you have posted here is the matrix that you have input to matlab. Actually - see below - I suspect they differ in the element at the bottom right-hand corner. EDIT: I think the version you sent to matlab had imaginary part -0.5j, whereas the version given to scipy involved just 0.j. The real part is OK.
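A quick numerical check of this suspicion, using only the values already quoted above (the -0.5j corner entry is my guess, not something confirmed in the question): the MATLAB vector is annihilated by the modified matrix, to within the four decimals printed.

import numpy as np

# Same matrix as in the question, except for the suspected corner entry 0.57735027 - 0.5j
A2 = np.array([
    [1.+0.j, 0.+0.j, 0.+0.j, -0.28867513+0.5j],
    [0.+0.j, 1.+0.j, 0.+0.j, -0.28867513-0.5j],
    [0.+0.j, 0.+0.j, 1.+0.j,  0.57735027-0.5j],
])

# The vector reported by matlab's null()
matlab = np.array([0.2235-0.3134j, 0.1596+0.3502j, -0.4151+0.2949j, 0.6636+0.0639j])

# All residuals come out around 1e-4, i.e. zero to the precision of the printed MATLAB values
print(np.abs(A2 @ matlab))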

Your scipy.null_space vector correctly gives 0 when multiplied by the given matrix A; your matlab vector doesn't ... for that matrix.

Given that the matrix is in reduced-row-echelon form and has rank 3, it is not that difficult to work out, to within an arbitrary unit-magnitude complex factor, a single-vector generator for the null space by hand. The ratio of elements in the two solutions is a constant (complex) number, except for the third row. This would arise if the matrices sent to scipy.null_space and to matlab differ in a single element in the bottom right-hand corner.
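To make that concrete, here is a minimal sketch of building such a generator by hand from the reduced-row-echelon form [I | c], using numpy purely for the arithmetic:

import numpy as np

# For a rank-3 RREF matrix [I | c] with four columns, A x = 0 forces
# x[:3] = -c * x[3], so the null space is spanned by [-c, 1]
c = np.array([-0.28867513+0.5j, -0.28867513-0.5j, 0.57735027-0.j])
v = np.append(-c, 1.0)
v /= np.linalg.norm(v)      # normalise to unit length, as null_space does
print(v)                    # agrees with the scipy vector up to a unit-magnitude factor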

The scipy.null_space vector is correctly normalised to 1.

I doubt that either piece of software is faulty. I think you should check that you have the same matrix in both codes.

import numpy as np

# The matrix exactly as posted in the question
A = np.array( [
                [ 1.+0.j,  0.+0.j,  0.+0.j, -0.28867513+0.5j],
                [ 0.+0.j,  1.+0.j,  0.+0.j, -0.28867513-0.5j],
                [ 0.+0.j,  0.+0.j,  1.+0.j,  0.57735027-0.j ]
              ] )

# Null-space vector reported by scipy.linalg.null_space
python = np.array( [
                     0.24235958-0.32852474j,
                     0.16333098+0.37415192j,
                    -0.40569056-0.04562718j,
                     0.70267665+0.0790286j 
                    ] )

# Null-space vector reported by matlab's null()
matlab = np.array( [
                     0.2235 - 0.3134j,
                     0.1596 + 0.3502j,
                    -0.4151 + 0.2949j,
                     0.6636 + 0.0639j 
                    ] )

# Check which candidate actually satisfies A @ x = 0, compare them
# element-wise, and confirm the scipy vector has unit norm
print(     'A @ python =\n', A @ python )
print( '\n\nA @ matlab =\n', A @ matlab )
print( '\n\npython / matlab =\n', python / matlab )
print( '\n\nabs(python) =\n', np.linalg.norm( python ) )

Output:

A @ python =
 [ 6.71328548e-09-6.37871800e-09j  6.71328548e-09+3.62128200e-09j
 -6.39980441e-09+3.54772201e-09j]


A @ matlab =
 [-1.48162680e-05-4.63408070e-05j -1.48162680e-05-4.63408070e-05j
 -3.19703608e-02+3.31792682e-01j]


python / matlab =
 [1.06043801+0.01707621j 1.06065285+0.01698805j 0.59761752+0.53448467j
 1.06051995+0.01697013j]


abs(python) =
 1.0000000015891697

I don't think you should be bothered by deviations between different programming languages. The difference might be caused by the precision of the number representation, the algorithm under the hood, the default configuration of the functions, and so on. The only thing that matters is: is the solved "null space" really consistent with its mathematical definition, i.e., M * x = 0?

Below are the results of computing the null space of the same matrix on multiple platforms; the most important thing is to verify each result.

As we can see, the outcomes of computing the null space of a matrix in different languages are not presented the same, but all of them meet the definition of a null space if you check the matrix product M * x. In this sense, the solved null spaces make sense and are good enough.
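A minimal, generic way to do that verification yourself (the helper name is just for illustration, not part of any library):

import numpy as np

def is_null_space(M, X, tol=1e-10):
    # Every column of X should be mapped (numerically) to zero by M
    return np.linalg.norm(M @ X) < tol

# e.g. is_null_space(A, null_space(A)) should return True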


Matlab

A = [1.+0.j,  0.+0.j,  0.+0.j, -0.28867513+0.5j;
    0.+0.j,  1.+0.j,  0.+0.j, -0.28867513-0.5j;
    0.+0.j,  0.+0.j,  1.+0.j,  0.57735027-0.5j ]

x = null(A)

such that

x =

   0.2235 - 0.3134i
   0.1596 + 0.3502i
  -0.4151 + 0.2949i
   0.6636 + 0.0639i

and

>> A*x

ans =

   1.0e-15 *

   0.0278 + 0.0000i
  -0.0278 + 0.1665i
   0.0555 + 0.1110i

R

M <- cbind(
  c(1, 0, 0), 
  c(0, 1, 0), 
  c(0, 0, 1), 
  c(-0.28867513 + 0.5i, -0.28867513 - 0.5i, 0.57735027 - 0.5i)
)
x <- Conj(MASS::Null(t(M)))

such that

> x
                       [,1]
[1,]  0.1578535-0.35104189i
[2,]  0.2250844+0.31222613i
[3,] -0.3493225+0.37044977i
[4,]  0.6632680-0.06723087i

> M %*% x
                            [,1]
[1,] -2.220446e-16+2.220446e-16i
[2,]  2.775558e-17+1.110223e-16i
[3,]  0.000000e+00-1.110223e-16i

Python (scipy=1.15.1)

Given matrix A like below

import numpy as np
from scipy.linalg import null_space

A = np.array(
    [
        [1.0 + 0.0j, 0.0 + 0.0j, 0.0 + 0.0j, -0.28867513 + 0.5j],
        [0.0 + 0.0j, 1.0 + 0.0j, 0.0 + 0.0j, -0.28867513 - 0.5j],
        [0.0 + 0.0j, 0.0 + 0.0j, 1.0 + 0.0j, 0.57735027 - 0.5j],
    ]
)
  • With lapack_driver="gesdd", e.g.,
x = null_space(A, lapack_driver="gesdd")

such that

x=
[[ 0.22039582-0.31555321j]
 [ 0.16307918+0.34864499j]
 [-0.41213333+0.29900733j]
 [ 0.6641982 +0.05731663j]]
A@x=
[[ 5.55111512e-17+0.00000000e+00j]
 [-4.51028104e-17+2.22044605e-16j]
 [ 5.55111512e-17+1.11022302e-16j]]
  • With lapack_driver="gesvd", e.g.,
x = null_space(A, lapack_driver="gesvd")

such that

x=
[[ 0.19245009-3.33333334e-01j]
 [ 0.19245009+3.33333334e-01j]
 [-0.38490018+3.33333334e-01j]
 [ 0.66666667-9.12968683e-18j]]
A@x=
[[ 5.55111512e-17-5.55111512e-17j]
 [-1.28956617e-17+0.00000000e+00j]
 [-5.55111512e-17+0.00000000e+00j]]
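Since the matrix has rank 3, its null space is one-dimensional, so the two vectors above can only differ by a unit-magnitude complex factor. A quick sketch confirming this, reusing the printed values:

import numpy as np

x_gesdd = np.array([ 0.22039582-0.31555321j,  0.16307918+0.34864499j,
                    -0.41213333+0.29900733j,  0.66419820+0.05731663j])
x_gesvd = np.array([ 0.19245009-0.33333333j,  0.19245009+0.33333333j,
                    -0.38490018+0.33333333j,  0.66666667+0.j        ])

ratio = x_gesdd / x_gesvd
print(ratio)            # the same complex number in every component ...
print(np.abs(ratio))    # ... with modulus ~1, i.e. the bases differ only by a phase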