
A COMPARATIVE STUDY OF GAUSS-SEIDEL AND NEWTON-RAPHSON METHODS: MODELLING AND PERFORMANCE ANALYSIS USING PYTHON PROGRAMMING
ABSTRACT
This study presents a comparative analysis of the Gauss-Seidel and Newton-Raphson methods for solving systems of linear equations. The methods are implemented in Python to highlight their computational efficiency and versatility on a set of standard problems. The Gauss-Seidel method, an iterative approach ideal for diagonally dominant systems, is compared with the Newton-Raphson method, which is known for its rapid convergence on nonlinear systems. Through performance analysis covering convergence rates, computational overhead, and accuracy, the study demonstrates the comparative advantages and limitations of both methods. Python’s capabilities, including its autograd module for evaluating the Jacobian and its built-in libraries for numerical tasks, have been used to enhance implementation efficiency and reproducibility. This work serves as a resource for educational purposes and provides a framework for researchers to model and analyse complex systems of equations.
KEYWORDS
Gauss-Seidel, Newton-Raphson, Python programming, iterative methods, computational performance, numerical analysis
1. INTRODUCTION
Numerical methods play a crucial role in solving complex mathematical problems that are difficult or sometimes impossible (for example, the Navier-Stokes equations) to solve analytically (Rauh, 2001; Fascetti et al., 2021). Among these methods, iterative techniques such as the Gauss-Seidel and Newton-Raphson methods have risen to prominence due to their robustness and adaptability (Dumka et al., 2022; Faires and Burden, 2003). These methods are inherent to computational mathematics, engineering, and the applied sciences (Alves et al., 2021), finding applications in fields such as structural analysis, electrical circuit simulation, fluid mechanics, and machine learning and optimization problems (Rafiq et al., 2021).
The Gauss-Seidel method, named after Carl Friedrich Gauss and Philipp Ludwig von Seidel, became known in the mid-19th century as a modification of the Jacobi method for solving linear systems of equations. It shortens computation time by updating the variables consecutively within each iteration, thereby improving convergence under suitable conditions (Faires and Burden, 2003; Ahmadi et al., 2021; Miranker et al., 1967). The Newton-Raphson method, originally introduced by Isaac Newton and later formalized by Joseph Raphson in the late 17th century, transformed the approach to solving nonlinear equations. It combines a first-order Taylor series expansion with successive approximations to rapidly arrive at the roots of a function or the solutions of systems of nonlinear equations (Pho, 2022; Akram and Ul Ann, 2015).
These methods are essential in engineering and scientific computations. The Gauss-Seidel method is particularly effective for solving large, sparse linear systems encountered in finite element analysis, thermal simulations, and fluid flow problems (Moiseienko et al., 2023). Its simplicity and low memory requirements make it an ideal choice in resource-constrained environments. Conversely, the Newton-Raphson method excels in addressing nonlinear problems, such as power flow studies in electrical networks, structural mechanics, and robotic kinematics. Its rapid convergence, when applied to well-posed problems, ensures computational efficiency in high-stakes applications like control systems and real-time simulations (Zhang et al., 2025).
Python has emerged as a leading programming language for scientific computing due to its simplicity, readability, and extensive libraries and modules (Python, 2017; Pawar et al., 2022). Modules such as NumPy, SciPy, and Matplotlib provide robust tools for numerical computation, matrix operations, and data visualization (Bauckhage, 2020; Ranjani et al., 2019; Kanagachidambaresan et al., 2021). Its ability to handle large data sets, combined with an active developer community, ensures continuous improvement and support for current and future research. By using Python in this study, the aim is to provide replicable and accessible implementations of the Gauss-Seidel and Newton-Raphson methods for linear systems of equations. Python’s integration with modern visualization tools also allows clear representation of iterative processes and convergence behaviour, making it well suited for both academia and industry.
While the Gauss-Seidel and Newton-Raphson methods are widely studied, this work distinguishes itself by offering a detailed comparative analysis through Python-based modelling. The novelty lies in the transparent implementation of these methods in Python, which bridges theoretical understanding and practical application. The study also evaluates their performance across a range of problems, providing insights into their convergence characteristics, computational efficiency, and suitability for specific applications. In addition, Python-based visualization tools highlight iterative progress and error metrics, aspects often overlooked in comparable studies.
2. METHODOLOGY
For a system of linear equations, as shown in Eqn. (1), the Gauss-Seidel and Newton-Raphson methods and their algorithms are explained in this section (Faires and Burden, 2003).
a₁₁x₁ + a₁₂x₂ + ⋯ + a₁ₙxₙ = b₁
a₂₁x₁ + a₂₂x₂ + ⋯ + a₂ₙxₙ = b₂
⋮
aₙ₁x₁ + aₙ₂x₂ + ⋯ + aₙₙxₙ = bₙ    (1)
In the above equation, xᵢ represents the unknowns, aᵢⱼ the coefficients of these unknowns, and bᵢ the right-hand-side constants.
2.1 Gauss-Seidel Method
First, the unknown diagonal variable in each equation is isolated and evaluated one by one. The Gauss-Seidel iterative formula calculates each unknown successively, using the most recent values of the other variables, as shown below (Epperson, 2021):
xᵢ = ( bᵢ − Σ_{j<i} aᵢⱼ xⱼ(new) − Σ_{j>i} aᵢⱼ xⱼ(old) ) / aᵢᵢ,   i = 1, 2, …, n    (2)
Each iteration calculates x₁, x₂, …, xₙ, using the updated values as soon as they become available. This makes the method fast and gives it the capability to converge. These equations are then solved repeatedly, following the algorithm given below, to arrive at the final answer:
• Initialize variables:
o Set the maximum number of iterations, N.
o Set the tolerance for convergence, 𝜀𝑠.
o Set the array of initial guess 𝑥𝑔.
• Define Equations:
o Define a function Eqns() that computes the new values of the variables at each iteration by using Eqn. (2).
o Return the new values as an array.
• Gauss-Seidel loop (until convergence or the maximum number of iterations):
o Start a loop from 1 to N.
o Update the current guess 𝑥𝑛 using the function 𝐸𝑞𝑛𝑠().
o Compute the error ε as the Euclidean norm of the difference between the old and new guesses: ε = ||xn − xg||.
o If ε ≤ εs, print the solution and the number of iterations, and exit the loop.
o Otherwise, update the guess xg for the next iteration with xn.
o If N iterations are reached without convergence, print that the solution did not converge.
2.2 Newton-Raphson Method
For the Newton-Raphson method, each equation must be written as a function f(x₁, x₂, …, xₙ) = 0, so that a system of functions is created as shown below (Kiusalaas, 2015):
f₁(x₁, x₂, …, xₙ) = 0
f₂(x₁, x₂, …, xₙ) = 0
⋮
fₙ(x₁, x₂, …, xₙ) = 0    (3)
The Newton-Raphson updating rule is as follows:
𝑥𝑘+1 = 𝑥𝑘 + Δ𝑥 (4)
Here Δx is the correction at each iteration, which can be evaluated as follows:
Δx = J(xₖ)⁻¹ · (−F(xₖ))    (5)
Here, 𝐽(𝑥) is the Jacobian Matrix of 𝐹(𝑥) which is defined as:
J(x) = [∂fᵢ/∂xⱼ],   i, j = 1, 2, …, n    (6)
Thus, the Newton-Raphson updates closely resemble Gauss-Seidel iterations but use a different approach to updating x, incorporating Δx for faster convergence. Although obtaining the Jacobian matrix may seem difficult, in Python it can easily be done using the autograd module (Maclaurin et al., 2015). The algorithm for Newton-Raphson is as follows:
• Initialize variables:
o Set the maximum number of iterations, N.
o Set the tolerance for convergence, 𝜀𝑠.
o Set the array of initial guess xg.
• Define Equations:
o Define a function Fn() that computes the values of F(x) at each iteration by using Eqn. (3).
o Return the new values of f₁, f₂, and so on.
• Newton-Raphson loop (until convergence or the maximum number of iterations):
o Start a loop from 1 to N.
o Compute the function 𝐹(𝑥) = 𝐹𝑛(𝑥𝑘) at the current guess 𝑥𝑘.
o Compute the Jacobian matrix J(xₖ) of the system at the current guess xₖ.
o Solve for Δx by using Eqn. (5).
o Compute the error ε as the Euclidean norm, i.e. ε = ||Δx||.
o Evaluate the new value of x for the current iteration by using Eqn. (4).
o If ε ≤ εs, print the solution and the number of iterations, and exit the loop.
o Otherwise, update the guess 𝑥𝑔 for the next iteration.
o If N iterations are reached without convergence, print that the solution did not converge.
3. PYTHON IMPLEMENTATION
3.1 Python Functions for Gauss-Seidel and Newton-Raphson
The Python function for the Gauss-Seidel method is as follows:
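The listing below is a minimal sketch consistent with the description in this section; the function name, the argument names (Equation, xg, es, Max_iter), and the defaults of 1×10⁻⁴ and 100 are taken from the text, while the exact body is a reconstruction:

```python
import numpy as np

def Gauss_Siedel(Equation, xg, es=1e-4, Max_iter=100):
    """Iteratively solve the system using the Gauss-Seidel scheme of Eqn. (2)."""
    xg = np.array(xg, dtype=float)
    for i in range(1, Max_iter + 1):
        xn = np.array(Equation(xg), dtype=float)  # new values from Eqn. (2)
        err = np.linalg.norm(xn - xg)             # Euclidean norm of the change
        if err <= es:
            print(f"Converged in {i} iterations")
            return i, xn
        xg = xn                                   # carry new values to the next pass
    print("Solution did not converge")
    return Max_iter, xg
```

The Equation callable is expected to reuse freshly computed components within a sweep, which is what distinguishes Gauss-Seidel from the Jacobi method.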


The above Gauss_Siedel() function solves a system of equations iteratively using the Gauss-Seidel algorithm. It takes as input the system of equations (Equation), an initial guess array (xg), a convergence tolerance (εs, default 1×10⁻⁴), and the maximum number of iterations (Max_iter, default 100). Starting with the initial guess, the function updates the variables by evaluating the equations and computes the error as the norm of the difference between successive iterates. If the error falls below the tolerance, the function prints a message indicating convergence and returns the iteration count and the solution. If convergence is not achieved within the maximum number of iterations, it prints a failure message. At each step, the previous guess is replaced by the current solution for the next iteration.
The Python functions for the Newton-Raphson method and the Jacobian are as follows:
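A minimal sketch of the solver, following the description in this section (argument names Fn, Jacobian, xg, es, and Max_iter are taken from the text). For a self-contained example the Jacobian is passed as any callable returning the matrix of partials; in the study it is generated with autograd. Eqn. (5) is applied by solving the linear system J·Δx = −F rather than forming the inverse explicitly, a standard numerically safer equivalent:

```python
import numpy as np

def Newton_Raphson(Fn, Jacobian, xg, es=1e-4, Max_iter=100):
    """Solve F(x) = 0 with the update rule of Eqns. (4) and (5)."""
    xk = np.array(xg, dtype=float)
    for i in range(1, Max_iter + 1):
        Fx = np.array(Fn(xk), dtype=float)        # F(x_k) at the current guess
        Jx = np.array(Jacobian(xk), dtype=float)  # J(x_k) at the current guess
        dx = np.linalg.solve(Jx, -Fx)             # Eqn. (5): solve J dx = -F
        xk = xk + dx                              # Eqn. (4): x_{k+1} = x_k + dx
        if np.linalg.norm(dx) <= es:              # error = ||dx||
            print(f"Converged in {i} iterations")
            return i, xk
    print("Solution did not converge")
    return Max_iter, xk
```

For a linear system the first step lands on the exact solution and the second confirms ||Δx|| ≈ 0, which is consistent with the 2-iteration counts reported in the results.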


The Newton_Raphson() function, as mentioned above, solves the system of equations using the Newton-Raphson method. It requires the equations’ function (Fn()), their Jacobian (Jacobian function), an initial guess array (xg), a convergence tolerance (εs, default 1×10⁻⁴), and the maximum number of iterations (Max_iter, default 100). In each iteration, it calculates the function values (Fx) and the Jacobian matrix (Jx) at the current guess, then solves the linear system to compute the change in the solution (Δx). The solution is updated, and the error (ε) is calculated as the norm of the change. If the error is below the specified tolerance, the function prints a convergence message and returns the iteration count and solution. Otherwise, the guess is updated for the next iteration. If the solution does not converge within the maximum number of iterations, a message indicating failure to converge is printed.
This function relies on the accurate evaluation of the Jacobian, which is computed with the help of the autograd module (Maclaurin et al., 2015). The Jacob(x) function computes the Jacobian matrix for a system of equations using autograd. It takes as input the function (Fn()) representing the system of equations and an array (x) of variable values. The function uses autograd.jacobian, which creates a callable object that computes the Jacobian matrix, i.e. the partial derivatives of each equation with respect to each variable. The Jacobian is evaluated at the given input x and returned as a 2D array (Bradbury et al., 2021). The autograd module computes the Jacobian through automatic differentiation: it traces the computational steps of the function and applies the chain rule efficiently. This ensures precise evaluation of the derivatives through a computational graph, avoiding the errors associated with numerical approaches such as finite differences. Moreover, using native NumPy bypasses autograd’s tracking, leading to incorrect or failed computations; therefore autograd.numpy has been used to handle arrays. Thus, the modules required for both Gauss-Seidel and Newton-Raphson are autograd and autograd.numpy (Liang et al., 2023). Matplotlib has been used for the visualization of results.
3.2 Implementation
• First, the autograd and autograd.numpy modules are imported.
• The maximum number of iterations, error tolerance, and array of guess values are initialized.
• The equation function (Eqns()) is defined based on the input equations for Gauss-Seidel, whereas for Newton-Raphson F(x) (Fn(x)) is defined.
• Depending on the scheme used, the Gauss_Siedel() or Newton_Raphson() function is called.
3.3 Test Cases
The following five types of problems have been taken to demonstrate the effectiveness of the models:
Case 1: Diagonally Dominant System
4𝑥1−𝑥2+𝑥3=15, −𝑥1+4𝑥2−𝑥3=8, 𝑥1−𝑥2+3𝑥3=10
Case 2: Symmetric Positive Definite System
10𝑥1+𝑥2+𝑥3=12, 𝑥1+7𝑥2+𝑥3=15, 𝑥1+𝑥2+5𝑥3=10
Case 3: Sparse Matrix
2𝑥1+3𝑥3=7, 3𝑥2+4𝑥3=8, 5𝑥1+𝑥2+2𝑥3=12
Case 4: Non-Diagonally Dominant System
𝑥1+2𝑥2−𝑥3=4, 3𝑥1−𝑥2+4𝑥3=10, 2𝑥1+3𝑥2+𝑥3=7
Case 5: Ill-conditioned System
0.001𝑥1+𝑥2=1, 𝑥1+2𝑥2=3
The Eqns() and Fn(x) functions for all the cases are summarized in Table 1.
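As an illustration of the kind of entries Table 1 summarizes, the Case 1 system can be cast into the two forms required by the solvers: Eqns() isolates each diagonal unknown per Eqn. (2), reusing freshly computed values in Gauss-Seidel fashion, while Fn() expresses the equations in residual form F(x) = 0 per Eqn. (3). A sketch for Case 1:

```python
def Eqns(x):
    # Each equation solved for its diagonal unknown (Eqn. (2));
    # freshly computed values are reused immediately (Gauss-Seidel style).
    x1 = (15 + x[1] - x[2]) / 4      # from 4x1 - x2 + x3 = 15
    x2 = (8 + x1 + x[2]) / 4         # from -x1 + 4x2 - x3 = 8
    x3 = (10 - x1 + x2) / 3          # from x1 - x2 + 3x3 = 10
    return [x1, x2, x3]

def Fn(x):
    # The same system in residual form F(x) = 0 (Eqn. (3))
    return [4*x[0] - x[1] + x[2] - 15,
            -x[0] + 4*x[1] - x[2] - 8,
            x[0] - x[1] + 3*x[2] - 10]
```

The exact solution of Case 1 is a fixed point of Eqns() and a root of Fn(), which provides a simple consistency check between the two formulations.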


4. RESULTS AND DISCUSSION
The performance comparison between the Gauss-Seidel and Newton-Raphson methods was based on the number of iterations required and the accuracy of the solutions for the given cases. Fig. 1 illustrates the number of iterations required by both methods for all cases. It is evident that Newton-Raphson consistently achieved convergence within just 2 iterations for all solvable cases, whereas Gauss-Seidel displayed variable performance. Specifically, Gauss-Seidel required up to 7 iterations in Case 1 and failed to converge within the maximum limit of 100 iterations in Cases 3 and 5. This highlights the dependency of Gauss-Seidel on the characteristics and nature of the problem.

The convergence behaviour for Case 1 is shown in Fig. 2, emphasizing the convergence speed of Newton-Raphson over Gauss-Seidel. The error residuals dropped to the desired tolerance (10⁻⁶) in just 2 iterations for Newton-Raphson, while Gauss-Seidel required 7 iterations to achieve the same precision. This displays the advantage of Newton-Raphson in terms of computational efficiency and reliability for this case.

A comparison of the final solutions for Case 1 is presented in Fig. 3. The solutions obtained using both methods were highly consistent, with minor differences attributable to the iterative nature of Gauss-Seidel. This confirms that while Gauss-Seidel can achieve acceptable accuracy, its slower convergence rate and occasional failure to converge on challenging systems (e.g., Cases 3 and 5) limit its applicability.

For Case 2, both methods converged successfully, with Gauss-Seidel requiring 5 iterations and Newton-Raphson only 2 (refer to Table 2). The solutions were consistent, confirming that both methods perform well under favourable conditions. In Case 3, however, Gauss-Seidel failed to converge within the maximum allowed iterations (100), while Newton-Raphson achieved convergence in 2 iterations. This highlights the limitations of Gauss-Seidel in handling systems with challenging characteristics, such as sparse or non-diagonally dominant matrices.

In Case 4, both methods converged successfully, with Gauss-Seidel requiring only 2 iterations, matching the performance of Newton-Raphson. This demonstrates that under ideal conditions Gauss-Seidel can be as efficient as Newton-Raphson. In Case 5, however, Gauss-Seidel once again failed to converge within the iteration limit, while Newton-Raphson converged in 2 iterations, providing an accurate solution. Overall, the analysis across all test cases indicates that Newton-Raphson consistently outperformed Gauss-Seidel in terms of convergence speed and robustness. Its ability to handle challenging systems effectively makes it the better choice for solving linear equations, particularly in applications that require high precision and efficiency. However, Gauss-Seidel remains a practical alternative for problems with favourable properties and limited computational resources.
5. CONCLUSIONS
The comparative study of the Gauss-Seidel and Newton-Raphson methods has revealed their strengths and applicability across different types of systems. The Gauss-Seidel method, with its simplicity and low computational cost, is well suited to linear systems with convergence-friendly properties. In contrast, the Newton-Raphson method demonstrates superior efficiency for nonlinear problems, benefiting from its fast convergence, albeit at the expense of higher computational complexity due to the Jacobian evaluation. Python’s robust libraries and features enabled a straightforward implementation and provided a simple platform for educational and research activities. This analysis not only offers insights into selecting appropriate iterative methods but also highlights the potential of Python programming for solving complex engineering and scientific problems. Future work could explore hybrid approaches combining these methods for enhanced performance on large-scale or stiff systems. Additional studies could address the scalability of these methods for large-scale sparse systems using parallel computing, as well as their application to real-time systems.
REFERENCES
Ahmadi, A., Manganiello, F., Khademi, A., Smith, M.C., 2021. A Parallel Jacobi-Embedded Gauss-Seidel Method, IEEE Trans. Parallel Distrib. Syst., 32, Pp. 1452–1464. https://doi.org/10.1109/TPDS.2021.3052091.
Akram, S., ul Ann, Q., 2015. Newton Raphson method calculator. Int. J. Sci. Eng. Res., 6, Pp. 1748–1752. https://atozmath.com/CONM/Bisection.aspx?q=nr&q1=3x%5E3-10x-14=0%60%60true%602%604%601%603&dp=4&do=1#PrevPart.
Alves, M.A., Oliveira, P.J., Pinho, F.T., 2021. Numerical Methods for Viscoelastic Fluid Flows, Annu. Rev. Fluid Mech. 53, Pp. 509–541. https://doi.org/10.1146/annurev-fluid-010719-060107.
Bauckhage, C., 2020. NumPy / SciPy Recipes for Data Science: Subset-Constrained Vector Quantization via Mean Discrepancy Minimization, Pp. 1–4.
Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., Zhang, Q., 2021. JAX: Autograd and XLA, Astrophys. Source Code Libr., ascl:2111.002. https://ui.adsabs.harvard.edu/abs/2021ascl.soft11002B.
Dumka, P., Dumka, R., Mishra, D.R., 2022. Numerical Methods Using Python, BlueRose.
Epperson, J.E., 2021. An introduction to numerical methods and analysis, John Wiley & Sons. https://doi.org/10.1002/9781119604754.
Faires, J.D., Burden, R.L., 2003. Numerical methods., Thomson.
Fascetti, A., Feo, L., Abbaszadeh, H., 2021. A critical review of numerical methods for the simulation of pultruded fiber-reinforced structural elements, Compos. Struct. 273, Pp. 114284. https://doi.org/10.1016/j.compstruct.2021.114284.
Kanagachidambaresan, G.R., Manohar Vinoothna, G., 2021. Visualizations, in: K.B. Prakash, G.R. Kanagachidambaresan (Eds.), EAI/Springer Innov. Commun. Comput., Springer International Publishing, Cham, pp. 15–21. https://doi.org/10.1007/978-3-030-57077-4_3.
Kiusalaas, J., 2015. Numerical Methods in Engineering with MATLAB®, Cambridge university press. https://doi.org/10.1017/cbo9781316341599.
Liang, L., Liu, M., Elefteriades, L., Sun, W., 2023. PyTorch-FEA: Autograd-enabled finite element analysis methods with applications for biomechanical analysis of human aorta, Comput. Methods Programs Biomed. 238, Pp. 107616. https://doi.org/10.1016/j.cmpb.2023.107616.
Maclaurin, D., Duvenaud, D., Adams, R.P., 2015. Autograd: Effortless Gradients in Numpy, ICML ’15 AutoML Workshop, Pp. 3. http://www.scipy.org/. https://github.com/HIPS/autograd http://www.cs.toronto.edu/~rgrosse/courses/csc321_2017/tutorials/tut4.pdf.
Miranker, W., Isaacson, E., Keller, H.B., 1967. Analysis of Numerical Methods, Courier Corporation. https://doi.org/10.2307/2003280.
Moiseienko, S., Tuchyna, U., Redchyts, D., Zaika, V., Vygodner, I., 2023. Comparative Analysis of Numerical Methods for Solving Linear Equation Systems for Poisson’s Equation, in: H. Altenbach, A.H.-D. Cheng, X.-W. Gao, А. Kostikov, W. Kryllowicz, P. Lampart, V. Popov, A. Rusanov, S. Syngellakis (Eds.), Adv. Mech. Power Eng., Springer International Publishing, Cham, Pp. 169–177.
Pawar, P.S., Mishra, D.R., Dumka, P., 2022. Solving First Order Ordinary Differential Equations using Least Square Method : A comparative study. Int. J. Innov. Sci. Res. Technol., 7, Pp. 857–864.
Pho, K.H., 2022. Improvements of the Newton–Raphson method, J. Comput. Appl. Math., 408, Pp. 114106. https://doi.org/10.1016/j.cam.2022.114106.
Python, S.K.R., 2017. The Fastest Growing Programming Language. Int. Res. J. Eng. Technol., 4, Pp. 354–357.
Rafiq, N., Yaqoob, N., Kausar, N., Shams, M., Mir, N.A., Gaba, Y.U., Khan, N., 2021. Computer-Based Fuzzy Numerical Method for Solving Engineering and Real-World Applications, Math. Probl. Eng., Pp. 6916282. https://doi.org/10.1155/2021/6916282.
Ranjani, J., Sheela, A., Pandi Meena, K., 2019. Combination of NumPy, SciPy and Matplotlib/Pylab-A good alternative methodology to MATLAB-A Comparative analysis, in: Proc. 1st Int. Conf. Innov. Inf. Commun. Technol. ICIICT, Pp. 1–5. https://doi.org/10.1109/ICIICT1.2019.8741475.
Rauh, A., 2001. Remarks on unsolved basic problems of the Navier-Stokes equations, Pp. 1–7. http://arxiv.org/abs/physics/0101027.
Zhang, H., Tan, B., She, D., Shi, L., 2025. An efficient method for solving flow field in high temperature gas-cooled reactor, Prog. Nucl. Energy 180, Pp. 105599. https://doi.org/10.1016/j.pnucene.2024.105599.