Talk:Iterative method
The contents of the Methods of successive approximation page were merged into Iterative method on 12 September 2024. For the contribution history and old versions of the redirected page, please see its history; for the discussion at that location, see its talk page.
This article is rated B-class on Wikipedia's content assessment scale. It is of interest to the following WikiProjects:
History section is too shallow
First, it ignores work by Gauss's predecessors, including Newton. Second, it ignores the real-world drivers (astronomy, business, and military issues) behind the effort to establish quicker tabulation methods. --70.112.146.196 17:30, 8 July 2006 (UTC)
- I stumbled upon this article and was indeed surprised that Newton's method was not even mentioned. Isn't it a good, simple example of an iterative method?--Grondilu (talk) 16:32, 8 March 2014 (UTC)
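For reference, a minimal Python sketch of Newton's method as such an example (illustrative only; the function names are hypothetical, not from the article):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: iterate x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:    # stop once successive approximations agree
            break
    return x

# Square root of 2 as the positive root of f(x) = x**2 - 2.
root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
print(root)  # ≈ 1.4142135623730951
```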
Cost of moving, cost of evaluation?
In iterative methods, there may be cases where the cost of moving from one point to the next varies with location (e.g., it might cost more to move from (1,0,0,0) to (4,3,2,1) than to (2,0,0,0)); there also may be cases where the cost of evaluation is location-dependent (e.g., evaluating f(1000,1000) may cost less than evaluating f(42,37)). In such cases, the optimal "next guess" would be different than if these constraints did not exist. Is there literature on this sort of thing? —Ben FrantzDale 04:53, 27 April 2007 (UTC)
- Can you give an example of a realistic problem where evaluating a cost function at a point requires significantly more computation than evaluating the same cost function at a different point? Oleg Alexandrov (talk) 06:22, 27 April 2007 (UTC)
- One thing I'm thinking about is mesh adaptation for finite element analysis. Refining the mesh lets you explore a higher-dimensional solution space but at a computational cost. Another example is minimizing the energy of a bunch of atoms. Moving a bunch of atoms is O(N). Suppose you are doing steepest descent. If you knew that the force was almost zero for most atoms and only significant for n atoms, you could move from one configuration to another in O(n) time, putting off the O(N) step until it becomes necessary. I have other crazy ideas in mind as well.
- A less-abstract example came up in real life. I was hiking with my brother in the hills near San Diego (photo). We were trying to get to the top of a hill. There was no trail, so we were bushwhacking. From a distance, it wasn't possible to tell if bushes were small and could be walked over or were head-height and tangled. We tried to get to the top as quickly as possible, so we went for the shortest route, but as we went along, the bushes slowed us down, so we followed the path that seemed locally to be optimal in terms of time and in terms of scratches from branches. If we had known a priori that basically the top of the hill was densely covered in one convex patch, we could have used that knowledge to first find the edge of the dense vegetation, then traverse that edge to find the highest easily-walkable point, then go from there to the top.
- Does that make sense? Are there mathematical tools to discuss such things? —Ben FrantzDale 07:34, 27 April 2007 (UTC)
- Interesting question, Ben FrantzDale.
- The algorithms used in motion planning and pathfinding are more relevant to physically climbing a physical hill than mathematical hill climbing methods that metaphorically climb an imaginary, abstract hill.
- Many people are trying to collect enough shortcut heuristics, some of them similar to the one you mentioned, such that applying all of them helps solve the protein folding problem to give results that are "accurate enough" to be useful in a reasonable amount of CPU time.
- Similarly, in n-body simulation (see n-body simulation#Calculation optimizations) there are shortcut heuristics that give "accurate enough" simulations in less CPU time, and a few characteristics that can be calculated directly without doing any simulation.
- Is there a general field of study that covers all of these very specific examples? --DavidCary (talk) 15:13, 27 March 2015 (UTC)
Iterative method for division algorithms and its generalization to the LSE solving
Standard division algorithms – the restoring and non-restoring procedures – produce one digit of the final quotient per iteration. The division methods are all based on the form X = b / A, where:
• X = the quotient,
• b = the dividend,
• A = the divisor.
Division methods are all based on a standard recurrence equation:
W_{k+1} = 2·W_k − A·q_k,
where:
• W_k = the partial remainder of the division,
• q_k = the digit of the quotient X in the k-th position,
• A = the divisor.
Both restoring and non-restoring division are based on the analysis of the sign of the partial remainder W_k. If W_k >= 0, the divisor A is subtracted from the partial remainder W_k; if W_k < 0, the divisor A is added to the partial remainder W_k. Neither method takes into account the value of the partial remainder W_k, only its sign.
Taking into account the value of the most significant digit of the partial remainder W_k can accelerate the division process. If the most significant digit of W_k is "1", then, according to the sign of W_k, we add or subtract the divisor A to or from W_k and write −1 or +1 to the corresponding position of the quotient X, as in the non-restoring division procedure.
But if the most significant digit of W_k is "0", then we skip this iteration and write "0" to the corresponding position of the quotient X. So we obtain the quotient in a signed-digit (ternary) system with the digit set {+1, 0, −1}, which is then easily converted into the traditional binary system.
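The skip-on-zero scheme described above can be sketched in Python for integer operands. This is an illustrative reconstruction, not the poster's code: the test |2·W_k| < A stands in for "the most significant digit of W_k is 0", and all names are hypothetical.

```python
from fractions import Fraction

def sd_divide(b, A, n=16):
    """Radix-2 division with signed quotient digits in {-1, 0, +1}.

    Implements the recurrence W_{k+1} = 2*W_k - q_k*A, emitting q_k = 0
    (and skipping the add/subtract) whenever |2*W_k| < A.
    Requires 0 <= b < A so the quotient is a fraction in [0, 1).
    """
    assert 0 <= b < A
    W = b
    digits = []
    for _ in range(n):
        if 2 * abs(W) < A:
            q = 0          # leading digit of W is 0: skip this iteration
        elif W >= 0:
            q = 1          # subtract the divisor, as in non-restoring division
        else:
            q = -1         # add the divisor
        W = 2 * W - q * A
        digits.append(q)
    # Convert the signed-digit string back to an ordinary binary fraction.
    Q = 0
    for q in digits:
        Q = 2 * Q + q
    return digits, Fraction(Q, 2 ** n)   # approximates b / A

digits, x = sd_divide(3, 7, n=16)
print(digits, float(x))   # digits drawn from {+1, 0, -1}; value close to 3/7
```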
Evidently this approach can be generalized to vectors and matrices. For example, if A is an m-th order matrix, b the right-hand side of the linear system of equations, and X the vector of unknowns, we have X = A^{−1}·b, or A·X − b = 0. For the k-th iteration:
A·X_k − b = W_k,
where:
• W_k = the residual vector at the k-th iteration,
• X_k = the vector of unknowns at the k-th iteration.
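One standard way to drive the residual W_k = A·X_k − b to zero is the Richardson iteration X_{k+1} = X_k − ω·W_k. This is a minimal sketch of that well-known method, not necessarily the scheme the poster has in mind; the step size and the symmetric positive definite example are assumptions.

```python
import numpy as np

def richardson(A, b, omega=None, tol=1e-10, max_iter=10_000):
    """Richardson iteration: x_{k+1} = x_k - omega * (A @ x_k - b)."""
    x = np.zeros(len(b))
    if omega is None:
        # A safe step for symmetric positive definite A (assumption).
        omega = 1.0 / np.linalg.norm(A, 2)
    for _ in range(max_iter):
        W = A @ x - b                  # residual, as in the recurrence above
        if np.linalg.norm(W) < tol:
            break
        x = x - omega * W
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # small SPD example
b = np.array([1.0, 2.0])
x = richardson(A, b)
print(x)   # close to np.linalg.solve(A, b)
```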
vladimir@baykov.de — Preceding unsigned comment added by Vladimir Baykov (talk • contribs) 14:52, 12 March 2011 (UTC)
Proposed merge of Methods of successive approximation into Iterative method
As per numerical analysis, "iterative methods form successive approximations that converge to the exact solution". fgnievinski (talk) 02:56, 21 December 2023 (UTC)
- Merger complete. Klbrain (talk) 18:51, 12 September 2024 (UTC)
IDR(stab)
It is interesting that someone deleted the articles on IDRstab and the mention of IDR, which is the fastest Krylov method for non-symmetric systems. Makes no sense, especially since that is the first go-to method for everyone knowledgeable enough, which this article ironically prevents. 31.17.92.45 (talk) 17:34, 25 November 2024 (UTC)