A Sufficient Descent Three-Term Conjugate Gradient Algorithm for Unconstrained Optimization

In recent years, three-term conjugate gradient (TT-CG) algorithms have attracted interest for large-scale unconstrained optimization due to appealing practical features such as simple computation, low memory requirements, and a better sufficient descent property.

Here α_{k-1} denotes a positive step size and d_k represents the search direction. Typically, the direction takes the form d_k = −g_k + β_k d_{k-1}, where the scalar β_k ∈ ℜ is the CG parameter. The parameter β_k is usually chosen so that (1.2)-(1.3) reduce to the linear CG method [2]. Likewise, if w(x) is a strongly convex quadratic function and an exact line search (ELS) is used, all choices of β_k in these methods coincide. For nonquadratic functions, however, different choices of β_k produce significantly different performance in the corresponding methods. Although the FR, DY, and CD methods have strong convergence properties, they may perform poorly in practice due to jamming. Conversely, the PRP, HS, and LS methods frequently perform well despite their weaker convergence properties. Moreover, because it essentially restarts whenever a bad direction occurs, the PRP method has for many years been regarded as one of the most efficient CG methods in practical computation [3]. The literature on CG algorithms is rich, including the HS, FR, PRP, CD, LS, and DY methods [4][5][6][7][8][9][10], respectively.
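As a concrete illustration of the classical two-term update d_k = −g_k + β_k d_{k-1}, the following sketch uses the Fletcher-Reeves parameter β_k^{FR} = ‖g_k‖²/‖g_{k-1}‖²; any of the classical choices of β_k mentioned above could be substituted in its place.

```python
import numpy as np

def fr_beta(g_new, g_old):
    # Fletcher-Reeves parameter: beta_k = ||g_k||^2 / ||g_{k-1}||^2
    return (g_new @ g_new) / (g_old @ g_old)

def cg_direction(g_new, g_old, d_old):
    # Classical two-term CG direction: d_k = -g_k + beta_k * d_{k-1}
    return -g_new + fr_beta(g_new, g_old) * d_old
```

With an exact line search on a strongly convex quadratic, every classical β_k yields the same sequence of directions; the choice only matters for general nonlinear objectives.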
However, because an ELS is usually impractical for large-scale problems, the strong Wolfe-Powell conditions (SWPC) are widely used in CG methods to establish convergence results: w(x_k + α_k d_k) ≤ w(x_k) + δ α_k g_k^T d_k and |g(x_k + α_k d_k)^T d_k| ≤ σ|g_k^T d_k|, where 0 < δ < σ < 1 and d_k, which traces a path to the minimum, must be a descent direction [11].
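A minimal check of the strong Wolfe-Powell conditions can be sketched as follows; the default values of delta and sigma match those used in the numerical experiments later in the paper.

```python
import numpy as np

def satisfies_strong_wolfe(w, grad, x, d, alpha, delta=1e-3, sigma=0.9):
    # Strong Wolfe-Powell conditions with 0 < delta < sigma < 1:
    #   sufficient decrease: w(x + a d) <= w(x) + delta * a * g^T d
    #   curvature:           |g(x + a d)^T d| <= sigma * |g^T d|
    g0_d = grad(x) @ d
    x_new = x + alpha * d
    decrease = w(x_new) <= w(x) + delta * alpha * g0_d
    curvature = abs(grad(x_new) @ d) <= sigma * abs(g0_d)
    return decrease and curvature
```

On the quadratic w(x) = ½‖x‖² with the steepest-descent direction, the unit step lands on the minimizer and satisfies both inequalities, while a tiny step fails the curvature condition.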

The remainder of this work is structured as follows. The second section is devoted to the evolution of TT-CG methods. After reviewing the TT-CG methods proposed by many researchers, the MTTBRB-CG method is presented in the third section, together with the corresponding algorithm and its descent properties under the SWPC. The fourth section presents preliminary numerical results obtained with the SWPC. Finally, closing remarks are given in the last section.

II-REVIEW OF RELATED WORK
Many researchers have investigated choices of β_k, since it is well known that this choice affects the numerical performance of the method. (1.2) and (1.3) define the classical algorithms, which compute the CG parameter as shown in (1.4). As modifications of the classical CG algorithms, many researchers have recently proposed a plethora of TT-CG methods for unconstrained optimization problems. [12] generalizes the CD method to produce the NTTCD method, defined by: [13] proposes an MTT-PRP procedure and demonstrates its global convergence using the Armijo line search.
Where y_{k-1} = g_k − g_{k-1}, the TT-HS method is created by [14] in a similar context. It is written as: The TT-HS method has the steepest-descent property; when an ELS is used, it reduces to the classic HS method. Furthermore, a modification of the TT-HS search direction is employed to ensure the global convergence properties of the direction given in (1.7). Since (1.7) by itself does not suffice to prove global convergence of the search direction, it should not be discarded; rather, it can be modified to be efficient and to satisfy the global convergence criteria. This modification is expected to outperform the modified TT-CG algorithm in terms of numerical effectiveness. Motivated by this appealing descent property, [15] proposes a new Dai-Liao-based TT-CG method: Where d_0 = −g_0 and t ≥ 0.
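The three-term HS direction discussed above can be sketched as follows. This follows the well-known Zhang-Zhou-Li form d_k = −g_k + β_k^{HS} d_{k-1} − θ_k y_{k-1}, whose hallmark is that g_k^T d_k = −‖g_k‖² holds independently of the line search; the exact variant in [14] may differ in detail.

```python
import numpy as np

def tths_direction(g_new, g_old, d_old):
    # Three-term HS direction (Zhang-Zhou-Li form):
    #   beta_k  = g_k^T y_{k-1} / d_{k-1}^T y_{k-1}
    #   theta_k = g_k^T d_{k-1} / d_{k-1}^T y_{k-1}
    #   d_k = -g_k + beta_k d_{k-1} - theta_k y_{k-1}
    # The theta_k term cancels beta_k's contribution to g_k^T d_k,
    # so g_k^T d_k = -||g_k||^2 whatever step size was used.
    y = g_new - g_old
    denom = d_old @ y
    beta = (g_new @ y) / denom
    theta = (g_new @ d_old) / denom
    return -g_new + beta * d_old - theta * y
```

The identity g_k^T d_k = −‖g_k‖² is easy to verify by substitution: the β_k and θ_k terms contribute equal and opposite amounts to the inner product.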
Although the PRP-CG method is widely viewed as one of the most productive CG methods in practical computation, its convergence properties are not strong. [16] proposes the VPRP method, a variant of the PRP method, whose parameter β_k is determined by:
The general convergence of the proposed formula under certain line searches, including the ELS, the Wolfe-Powell line search, and the Grippo-Lucidi line search, is discussed. [17] proposes two TT-CG methods, TT-SMAR and TT-SMARZ, defined as follows: The modified methods not only have a good computational effect but also share all of the attractive properties of the FR method. In addition, [18] extends the approach by proposing a TT-CG method whose new search directions are created using the following formulas:
Where the first parameter lies in [1, +∞) and the second lies in an interval bounded below by the first, the method is named TTBZAU. TTBZAU uses the Wolfe-Powell line search to establish global convergence for both convex and nonconvex functions, and it meets the sufficient descent condition regardless of the line search used. [19] recently proposes a TT-CG variant of the RMIL CG method, called the TTRMIL method, whose search direction is characterized by: The method proposed by [20] has the following direction:

III-THE NEW MTTBRB-CG METHOD AND THEORETICAL RESULTS
First, the researchers refer to the BRB-CG method [21]; its direction is defined by (3), and its formula for β_k is defined by: The current TT-CG algorithm makes minor changes to β_k via: The MTTBRB-CG method is the proposed TT-CG method, and its algorithm is described below.
Step 2: If ‖g_k‖ ≤ ε, then stop; otherwise, continue to the next step.
Step 6: If ‖g_k‖ ≤ ε, then stop; otherwise, continue to the next step.
Note that the researchers establish the descent condition and the global convergence properties of the MTTBRB-CG method. First, standard Presumptions are made about the objective function; these Presumptions apply throughout the paper.
A2-The function w is smooth and its gradient is Lipschitz continuous in a neighborhood ℕ of the level set Υ; that is, there exists a constant ℒ > 0 such that: Since {w(x_k)} is decreasing, the sequence {x_k} produced by the MTTBRB-CG method clearly remains in Υ. Furthermore, Presumption (A) implies that there is a positive constant Γ such that 0 < ‖g(x)‖ ≤ Γ for all x ∈ Υ [22]. (2.6) This section discusses the sufficient descent property and the global convergence of the MTTBRB-CG algorithm. The next theorem demonstrates that the presented MTTBRB-CG method meets the descent condition.

Theorem: (Descent condition)
Suppose that g_k and d_k are computed by the MTTBRB-CG algorithm, with the step length α_{k-1} obtained by an inexact line search (ILS). Then the sufficient descent condition holds for all k ≥ 0: g_k^T d_k ≤ −c‖g_k‖². (2.7)
Proof: For k = 0, (2.7) obviously holds. Consider the case k ≥ 1, for which (2.2) and (2.3) give: (2.8) Applying the restart criteria yields: (2.9) Combining (2.4), (2.8), and (2.9) with elementary arithmetic then gives: Since the third term is positive, it follows that: This establishes the descent condition. Next, the study demonstrates that the proposed MTTBRB-CG algorithm converges globally.
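Because the specific MTTBRB-CG formulas live in (2.2)-(2.4), only the verification itself can be shown generically; a sketch that tests the sufficient descent condition (2.7) for a candidate pair (g_k, d_k) and a given constant c follows.

```python
import numpy as np

def sufficient_descent_holds(g, d, c=1e-4):
    # Sufficient descent condition (2.7): g_k^T d_k <= -c * ||g_k||^2.
    # Any direction passing this test is guaranteed to be a descent
    # direction, since g^T d < 0 whenever g is nonzero.
    return g @ d <= -c * (g @ g)
```

For example, the steepest-descent direction d = −g satisfies (2.7) with c = 1 (with equality), while an ascent direction such as d = g fails it.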

IV-NUMERICAL RESULTS
The primary purpose of this section is to report on the effectiveness of the MTTBRB-CG algorithm on a collection of test problems. The codes were written in Fortran 77 using double-precision arithmetic.
The researchers tested small (n = 100) and large (n = 1000) dimensions on 26 nonlinear unconstrained problems, using the SWPC with δ = 10⁻³ and σ = 0.9. The same test functions were used to compare the reliability of the present algorithm against the well-known routines Dx [24], LS [9], MMAU [20], RMIL [26], TTRMIL [19], and BRB [21]. All of these methods were terminated when the stopping criterion ‖g_{k+1}‖ ≤ 1 × 10⁻⁶ was met; the routines were also forced to stop if the iteration count exceeded 600.
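The experimental protocol above can be sketched as a minimal driver loop. The stopping rule mirrors the paper's criteria (‖g_{k+1}‖ ≤ 10⁻⁶ or 600 iterations); a steepest-descent direction with simple backtracking stands in for the MTTBRB-CG direction and the SWPC line search, whose exact formulas are not reproduced here.

```python
import numpy as np

def run_driver(w, grad, x0, tol=1e-6, max_iter=600):
    # Minimal test driver mirroring the experimental protocol:
    # stop when ||g|| <= tol, or force a stop after max_iter iterations.
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            return x, k                  # converged: point and NOI
        d = -g                           # stand-in search direction
        alpha = 1.0
        # Backtracking until the sufficient-decrease inequality holds.
        while w(x + alpha * d) > w(x) + 1e-3 * alpha * (g @ d):
            alpha *= 0.5
        x = x + alpha * d
    return x, max_iter                   # forced stop at 600 iterations
```

The number of iterations (NOI) returned by such a driver is the cost measure compared across solvers in the results below.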

CONCLUSION
Recent CG-method research has produced a number of modifications to this method. TT-CG methods are a fascinating computational innovation that yields efficient conjugate gradient algorithms. In this work, the new MTTBRB-CG method, a modification of the BRB formula, provides sufficient descent directions for the objective function when combined with the SWPC. Under the same line search, the modified method's global convergence is established. Furthermore, numerical experiments on a set of test functions with an inexact line search show that the suggested method is effective and outperforms some traditional conjugate gradient methods.
Theorem: (Global convergence condition) Suppose that Presumption (A) holds. Let {x_k} be the sequence of points generated by the MTTBRB-CG algorithm. Then lim inf_{k→∞} ‖g_k‖ = 0.

Fig. (1) depicts the progress of the new MTTBRB-CG algorithm relative to the calculated number of iterations (NOI) on the test functions, using the Dolan-Moré performance-profile method with (n=100 & n=1000).

Fig. (2) depicts the progress of the new MTTBRB-CG algorithm relative to the calculated number of iterations (NOI) on the test functions, using the Dolan-Moré performance-profile method with (n=100 & n=1000).
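The Dolan-Moré comparison used in the figures can be computed as follows. Here T[i, s] holds the cost measure (e.g., NOI) of solver s on problem i (np.inf for failures), and ρ_s(τ) is the fraction of problems on which solver s is within a factor τ of the best solver.

```python
import numpy as np

def performance_profile(T, taus):
    # Dolan-More performance profile.
    # Ratio r[i, s] = T[i, s] / min over solvers of T[i, :];
    # rho_s(tau) = fraction of problems with r[i, s] <= tau.
    ratios = T / T.min(axis=1, keepdims=True)
    return np.array([[np.mean(ratios[:, s] <= tau) for tau in taus]
                     for s in range(T.shape[1])])
```

ρ_s(1) is the fraction of problems on which solver s is the (joint) best, and ρ_s(τ) approaches the solver's overall success rate as τ grows, which is exactly what the profile curves in Figs. (1)-(2) display.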

Table 1 :
Percentage performance comparison of the proposed method against the other algorithms

Table 2 :
Percentage performance comparison of the proposed method against the other algorithms