String-averaging methods for best approximation to common fixed point sets of operators: the finite and infinite cases
Fixed Point Theory and Algorithms for Sciences and Engineering volume 2021, Article number: 9 (2021)
Abstract
String-averaging is an algorithmic structure used when handling a family of operators in situations where the algorithm in hand requires employing the operators in a specific order. Sequential orderings are well known, and a simultaneous order means that all operators are used simultaneously (in parallel). String-averaging allows one to use strings of indices, constructed from subsets of the index set of all operators, to apply the operators along these strings, and then to combine their endpoints in some agreed manner to yield the next iterate of the algorithm. String-averaging methods were discussed and used for solving the common fixed point problem or its important special case, the convex feasibility problem. In this paper we propose and investigate string-averaging methods for the problem of best approximation to the common fixed point set of a family of operators. This problem consists of finding the point in the common fixed point set of a family of operators that is closest to a given point, called an anchor point, in contrast with the common fixed point problem, which seeks any point in the common fixed point set.
We construct string-averaging methods for solving the best approximation problem to the common fixed point set of either finite or infinite families of firmly nonexpansive operators in a real Hilbert space. We show that the simultaneous Halpern–Lions–Wittmann–Bauschke algorithm, the Halpern–Wittmann algorithm, and the Combettes algorithm, which were not previously labeled as string-averaging methods, are actually special cases of these methods. Some of our string-averaging methods are labeled as “static” because they use a fixed predetermined set of strings. Others are labeled as “quasi-dynamic” because they allow the choices of strings to vary between iterations in a specific manner, within a finite fixed predetermined set of applicable strings. For the problem of best approximation to the common fixed point set of a family of operators, the fully dynamic case, which would allow strings to vary unconditionally between iterations, remains unsolved, although it exists and is validated in the literature for the convex feasibility problem, where it is called “dynamic string-averaging”.
Introduction
String-averaging algorithmic structures are used for handling a family of operators in situations where the algorithm needs to employ the operators in a specific order. String-averaging allows one to use strings of indices taken from the index set of all operators, to apply the operators along these strings, and to combine their endpoints in some agreed manner to yield the next iterate of the algorithm.
If a point p of a set C is nearest to a given point x in space, known as the anchor, then p is a best approximation to x from C. When C is the common fixed point set of a family of self-mapping operators, the problem of finding such a p is known as the best approximation problem (BAP). In case the fixed point sets are all convex, this problem is a special case of two well-known problems: the convex feasibility problem (CFP), which is to find a (any) point in the intersection of closed convex sets, and the common fixed point problem (CFPP), where the closed convex sets in the CFP are the fixed point sets of operators of a given family. The CFP, the CFPP, and the BAP are widely studied and are useful in mathematics and various physical sciences (see, e.g., Bauschke and Borwein [6], Reich and Zalas [33], and Cegielski [13], to name but a few), including in the context of string-averaging for the CFP and the CFPP (see, e.g., Censor and Zaslavski [19, 20], Bargetz, Reich, and Zalas [3] and [33]).
Nevertheless, string-averaging algorithmic approaches for solving the BAP were, to the best of our knowledge, neither proposed nor investigated. Motivated by this, we devote the research presented here to developing string-averaging algorithmic schemes for finding the best approximation to the common fixed point set of either a finite or an infinite family of firmly nonexpansive operators. Besides purely mathematical interest, see, e.g., Dye, Khamsi, and Reich [27] and the references therein, motivation for the infinite case can also come from practical real-world situations. This point is succinctly made in the introduction of the recent paper of Kong, Pajoohesh, and Herman [30]. Although they refer to the infinite CFP, their arguments can also serve to justify the infinite case for the BAP. The case of infinitely many sets is useful in applications where there is a potential infinity of samples or measurements, each one of which gives rise to a convex set that contains the point we wish to recover; see, for example, Blat and Hero [10].
Contribution and structure of the paper
Our string-averaging methods are applied to families of firmly nonexpansive operators, and they follow the principles of the original string-averaging process suggested by Censor, Elfving, and Herman in [16]. Consider a set \(\mathcal{M}\) containing all pairs of the form \((\varOmega ,w)\), where Ω is a set of finite strings of indices, from which the string operators are formed, and w is a function that attaches to every string \(t\in \varOmega \) a positive real weight \(w(t)\) such that the sum of the weights equals 1.
We construct what we call a “static” string-averaging method in which a fixed predetermined single pair \((\varOmega ,w)\in \mathcal{M}\) is used throughout the iterative process. This approach solves the BAP for the common fixed point set of either a finite or an infinite family of firmly nonexpansive operators, and we show that the simultaneous Halpern–Lions–Wittmann–Bauschke algorithm (see, e.g., Censor [14, Algorithm 5]), the Halpern–Wittmann algorithm (see, e.g., [6, Algorithm 4.1]), and the Combettes algorithm [21], which were previously labeled as simultaneous or sequential algorithms, are special cases of our static string-averaging methods.
We extend the “static” notion in the finite case to the situation where a finite number of pairs \((\varOmega ,w)\in \mathcal{M}\) can be used and call the result a “quasi-dynamic” string-averaging method due to its resemblance to the dynamic string-averaging scheme (see [19]). This is done by ordering a finite predefined set of pairs \((\varOmega ,w)\in \mathcal{M}\) and forming a finite family of operators of the form \(\sum_{t\in \varOmega }w(t)T[t](x)\) for every such \((\varOmega ,w)\), where the \(T[t]\) are string operators. These are, in turn, used in a cyclic manner.
With the aid of a construction similar to the quasi-dynamic string-averaging algorithm, we propose a simultaneous string-averaging method.
We focus here on Halpern’s algorithm and the Halpern–Lions–Wittmann–Bauschke algorithm. There are, however, several other iterative processes for solving the BAP that are not treated here but can possibly also be extended via the string-averaging algorithmic concept, e.g., Dykstra’s algorithm, see [7, Sect. 30.2], and Haugazeau’s method, see [7, Sect. 30.3], where further references can be found.
The results presented here for the infinite case complement our earlier work in Aleyner and Censor [1], where a sequential algorithm for solving the BAP to the common fixed point set of a semigroup of nonexpansive operators in Hilbert space was studied. Only a sequential iterative process was investigated there, but the framework was more general due to the kind of operators (nonexpansive) and the size of the pool of operators (a semigroup, not limited to the countable case).
This is a theoretical work in the spirit of theoretical developments in fixed point theory presented in many of the earlier referenced papers. The string-averaging approach is actually not a single algorithm but an “algorithmic scheme”, so that every individual choice of lengths and assignments of strings gives rise to a different algorithm. In this way our string-averaging algorithmic scheme generalizes earlier algorithms, which become special cases of it. It is not practical to conduct a numerical experiment without having a specific scientific or real-world problem in hand, but it is expected that researchers who need to solve the best approximation problem to common fixed point sets will find valuable algorithmic information here.
The paper is structured as follows. After preliminaries in Sect. 3, the work is divided into two main parts: the first part, where the given family of firmly nonexpansive operators is finite, in Sects. 5 and 6, and the second part, where the given family of firmly nonexpansive operators is countable, in Sect. 7. The static, the quasi-dynamic, and the simultaneous string-averaging methods can be found in Sects. 5.1, 5.2, and 5.3, respectively. In Sect. 6, we show for which choices of pairs \((\varOmega ,w)\) the simultaneous version of the Halpern–Lions–Wittmann–Bauschke algorithm and the Halpern–Wittmann algorithm are special cases of our static string-averaging approach. In Sect. 7.1, we propose our static string-averaging method for solving the BAP in the infinite case, from which the well-known Combettes algorithm follows.
Previous related works
An early approach, based on projection operators, is John von Neumann’s alternating projection method [34] for solving the BAP with two closed linear subspaces. It has been widely studied and generalized by many authors, see, e.g., Bauschke and Borwein [5], Deutsch [23], Kopecká and Reich [31], and Deutsch and Hundal [25]. For the general case of arbitrary convex sets, Dykstra’s algorithm is a suitable modification of the alternating projection method that solves the BAP. This algorithm was first introduced by Dykstra in [28] for closed convex cones in finite-dimensional Euclidean spaces and later extended by Boyle and Dykstra in [11] to closed convex sets in a Hilbert space. Additional projection approaches can be found in Aragón Artacho and Campoy [2] and Bregman, Censor, Reich, and Zepkowitz-Malachi [9].
For a recent bibliography of papers and monographs on projection methods, see Censor and Cegielski [15]. A well-known method for solving the more general BAP is Halpern’s algorithm [29], whose strong convergence to the solution, under various sets of assumptions on the parameters, has been proved by several authors. The main contributions are due to Lions, to Wittmann, and to Bauschke, see Bauschke’s paper [4]. Additional literature background on the BAP and related problems appears in the excellent literature review of [2], from which we adapted some of the above.
The literature on string-averaging algorithms has also expanded since their first presentation in [16], and further related work has been published. In Crombez [22], a string-averaging algorithm for solving the common fixed point problem for a finite family of paracontracting operators is proposed. Censor and Segal in [17] suggested a string-averaging solution of the common fixed point problem for a finite family of sparse operators. In Censor and Tom [18], the behavior of a string-averaging algorithm for inconsistent convex feasibility problems is studied.
A generalized notion of string-averaging is that of dynamic string-averaging, in which one is able to use in the iterative process pairs \((\varOmega ,w)\) from a predefined set, denoted by \(\mathcal{M}_{*}\subset \mathcal{M}\) (see, e.g., [20]). In every \((\varOmega ,w)\in \mathcal{M}_{*}\), the length of every \(t\in \varOmega \) is bounded and \(w(t)\) is bounded away from zero. An example of a dynamic string-averaging algorithm can be found in [19]. That iterative process generates a convergent sequence whose limit is a point in the intersection of a finite family of closed convex sets, namely a solution of the CFP. Recently, Censor, Brooke, and Gibali in [12] were able to construct a dynamic string-averaging method for solving the multiple-operator split common fixed point problem (see, e.g., [12, Problem 1]) for families of cutters in Hilbert spaces.
Preliminaries
Throughout our work we denote by H a real Hilbert space with the inner product \(\langle \cdot \,,\,\cdot \rangle \) and induced norm \(\Vert \cdot \Vert \), and by \(\mathbb{N}\) the set of all natural numbers including zero. Let \(D\subseteq H\) be a nonempty set, and let \(T:D\rightarrow H\) be an operator. A vector \(x\in D\) is a fixed point of T if it satisfies \(T(x)=x\). The set of all fixed points of T is denoted by \(\operatorname{Fix}(T):= \{ x\in D\mid T(x)=x \} \). If D is a closed convex set, then for every \(u\in H\) there exists a unique point \(y\in D\) such that
$$ \Vert u-y \Vert \leq \Vert u-d \Vert ,\quad \forall d\in D. $$(1)
Such a point y is called the projection of u onto D and is denoted by \(P_{D}(u)\).
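For readers who wish to experiment numerically, here is a minimal Python/NumPy sketch (our own illustration; the helper names are not from the paper) of the projection operator for two elementary closed convex sets, a closed ball and a halfspace, for which the projection has a closed form:

```python
import numpy as np

def project_ball(u, center, radius):
    # Projection of u onto the closed ball B(center, radius):
    # the unique nearest point guaranteed by closedness and convexity.
    d = u - center
    dist = np.linalg.norm(d)
    return u.copy() if dist <= radius else center + (radius / dist) * d

def project_halfspace(u, a, b):
    # Projection of u onto the closed halfspace {x : <a, x> <= b}.
    viol = np.dot(a, u) - b
    return u.copy() if viol <= 0 else u - (viol / np.dot(a, a)) * a

p = project_ball(np.array([3.0, 4.0]), np.zeros(2), 1.0)  # -> [0.6, 0.8]
q = project_halfspace(np.array([2.0, 0.0]),
                      np.array([1.0, 0.0]), 1.0)          # -> [1.0, 0.0]
```

Closed-form projections such as these are the basic building blocks of projection methods for the CFP and the BAP.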
We now recall some definitions regarding various classes of operators.
Definition 3.1
Let D be a nonempty subset of H, and let \(T:D\rightarrow H\). Then T is:

(i)
Nonexpansive (NE) if
$$ \bigl\Vert T(x)-T(y) \bigr\Vert \leq \Vert x-y \Vert ,\quad \forall x,y \in D. $$(2) 
(ii)
Firmly nonexpansive (FNE) if
$$ \bigl\Vert T(x)-T(y) \bigr\Vert ^{2}\leq \bigl\langle x-y,T(x)-T(y) \bigr\rangle ,\quad \forall x,y\in D. $$(3) 
(iii)
Quasi-nonexpansive (QNE) if
$$ \bigl\Vert T(x)-y \bigr\Vert \leq \Vert x-y \Vert ,\quad \forall x\in D, y\in \operatorname{Fix}(T). $$(4) 
(iv)
Strictly quasi-nonexpansive (sQNE) if
$$ \bigl\Vert T(x)-y \bigr\Vert < \Vert x-y \Vert , \quad \forall x\in D \setminus \operatorname{Fix}(T), y\in \operatorname{Fix}(T). $$(5) 
(v)
C-strictly quasi-nonexpansive (C-sQNE) if T is quasi-nonexpansive and
$$ \bigl\Vert T(x)-y \bigr\Vert < \Vert x-y \Vert ,\quad \forall x\in D \setminus \operatorname{Fix}(T), y\in C, $$(6)where \(C\neq \emptyset \) and \(C\subseteq \operatorname{Fix}(T)\).
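It is a standard fact that metric projections onto nonempty closed convex sets are firmly nonexpansive. The following Python/NumPy sketch (our own illustration, not from the paper) checks the FNE inequality numerically for a halfspace projection at random pairs of points:

```python
import numpy as np

def project_halfspace(x, a, b):
    # Projection onto the closed convex halfspace {z : <a, z> <= b}.
    viol = np.dot(a, x) - b
    return x.copy() if viol <= 0 else x - (viol / np.dot(a, a)) * a

rng = np.random.default_rng(0)
a, b = np.array([1.0, 2.0]), 1.0
fne_holds = True
for _ in range(200):
    x, y = rng.normal(size=2), rng.normal(size=2)
    Tx, Ty = project_halfspace(x, a, b), project_halfspace(y, a, b)
    # firm nonexpansiveness: ||T(x) - T(y)||^2 <= <x - y, T(x) - T(y)>
    fne_holds = fne_holds and (
        np.linalg.norm(Tx - Ty) ** 2 <= np.dot(x - y, Tx - Ty) + 1e-12)
```

The `1e-12` slack only absorbs floating-point rounding; the inequality itself holds exactly for projections.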
A useful fact, which can be found, e.g., in [13, Proposition 2.1.11], is that the fixed point set of a nonexpansive operator is a closed convex set. Since an intersection of any collection of closed convex sets is a closed convex set (see, e.g., Lemma 1.13 and Example 2.3 in Deutsch [24]), it follows that the common fixed point set of a family of nonexpansive operators of any cardinality is a closed convex set and, thus, the projection of any given point u onto this set is well defined provided that the intersection is nonempty.
All our string-averaging methods use sequences of real numbers called steering sequences.
Definition 3.2
(Steering sequences)
A real sequence \((\lambda _{k})_{k\in \mathbb{N}}\) is called a steering sequence if it has the following properties:
$$ \lambda _{k}\in [ 0,1 ] ,\quad \forall k\in \mathbb{N}, $$(7)
$$ \lim_{k\rightarrow \infty }\lambda _{k}=0 \quad \text{and}\quad \sum_{k=0}^{\infty }\lambda _{k}=\infty , $$(8)
$$ \sum_{k=0}^{\infty } \vert \lambda _{k+1}-\lambda _{k} \vert < \infty . $$(9)
Observe that although \(\lambda _{k}\in [0,1]\), the definition rules out the option of choosing all \(\lambda _{k}\) equal to zero or all equal to one because of contradictions with the other properties. Infinitely many zeros are possible only if the remaining nonzero elements obey all the properties. The third property, in (9), was introduced by Wittmann, see, e.g., the recent review paper of López, Martín-Márquez, and Xu [32].
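As a concrete example (the particular choice is ours, though it is classical), the sequence \(\lambda _{k}=1/(k+1)\) satisfies all of the required properties: it lies in [0,1], tends to zero, has a divergent sum, and its successive differences telescope, so Wittmann’s summability condition holds. A quick numerical check in Python:

```python
# the classical steering sequence lambda_k = 1 / (k + 1)
N = 100_000
lam = [1.0 / (k + 1) for k in range(N)]

in_unit_interval = all(0.0 <= l <= 1.0 for l in lam)  # first property
partial_sum = sum(lam)              # grows like log N, witnessing divergence
# the successive differences telescope: sum |lam_{k+1} - lam_k| = 1 - 1/N
diff_sum = sum(lam[k] - lam[k + 1] for k in range(N - 1))
```

Here `partial_sum` is about 12.1, while `diff_sum` stays below 1 no matter how large N is.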
Lemma 3.3 below is composed of several known claims which will be used in the sequel. We supply pointers to the proof of the lemma for completeness.
Lemma 3.3
Let D be a nonempty closed convex subset of H, and let \(T:D\rightarrow D\) be FNE. Then

(i)
T is NE;

(ii)
If \(\operatorname{Fix}(T)\neq \emptyset \) then T is QNE;

(iii)
If \(\operatorname{Fix}(T)\neq \emptyset \) then T is sQNE.
Proof
For (i) and (ii) see [13, Theorem 2.2.4] and [13, Lemma 2.1.20], respectively. (iii) follows from the statement on page 70 of [7]. □
String-averaging methods for best approximation to the common fixed point set of a family of firmly nonexpansive operators: general
In the present work we consider the best approximation problem with respect to the common fixed point set of a family of FNEs. Our overall aim is to develop and investigate string-averaging algorithms for this problem. In the string-averaging algorithmic scheme, one constructs from a given family of operators a family of so-called string operators, which are certain compositions of some of the operators from the given family.
Using these string operators, the string-averaging algorithm proceeds in its iterative process. We show that such string-averaging algorithmic schemes converge to the projection of a given point (commonly called the “anchor”) onto the common fixed point set of the given family.
We are able to ensure convergence of our string-averaging methods by demanding that the operators of the given family be FNEs. There are well-known links between the classes of FNEs, NEs, sQNEs, and QNEs defined above that help us in our analysis. To take advantage of these links, we use Corollary 4.50 and Proposition 4.47 in Bauschke and Combettes’ book [7], which are proved for a finite family of sQNEs and a finite family of QNEs, respectively. We extend the usage of [7, Proposition 4.47] to the countable case in order to determine, in Sect. 7, the point to which our string-averaging algorithm converges.
String-averaging methods for best approximation to the common fixed point set of a family of firmly nonexpansive operators: the finite case
In our work we develop string-averaging algorithms for two distinct situations. One is the finite case, wherein the family of given FNEs is finite. The other is when the family of given FNEs is countably infinite. In this section we consider the finite case. We start by defining the terms that we use throughout this section.
Definition 5.1
Let D be a nonempty closed convex subset of H, and let \((T_{i})_{i=1}^{m}\) be a finite family of self-mapping operators on D. An index vector is a vector of the form \(t=(t_{1},t_{2},\ldots ,t_{p})\) such that \(t_{\ell }\in \{1,2,\ldots ,m\}\) for all \(\ell \in \{1,2,\ldots ,p\}\). For a given index vector \(t=(t_{1},t_{2},\ldots ,t_{q})\), we denote its length (i.e., the number of its components) by \(\gamma (t)=q\) and define the operator \(T[t]\) as the composition of the operators \(T_{i}\) whose indices appear in the index vector t, namely
$$ T[t]:=T_{t_{q}}T_{t_{q-1}}\cdots T_{t_{1}}, $$(10)
and call it a string operator. A finite set Ω of index vectors is called fit if, for each \(i\in \{ 1,2,\ldots ,m \} \), there exists a vector \(t=(t_{1},t_{2},\ldots ,t_{p})\in \varOmega \) such that \(t_{\ell }=i\) for some \(\ell \in \{ 1,2,\ldots ,p \} \). As in [19], we denote by \(\mathcal{M}\) the collection of all pairs \((\varOmega ,w)\), where Ω is a fit finite set of index vectors and \(w:\varOmega \rightarrow (0,1]\) is such that \(\sum_{t\in \varOmega }w(t)=1\).
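In code, a string operator is simply a composition applied along the string, with \(T_{t_{1}}\) acting first. A minimal Python sketch (the names are ours, not the paper’s), where `ops` maps an index i to the operator \(T_{i}\) and `t` is an index vector:

```python
def string_operator(ops, t):
    # Build T[t]: apply the operators along the string t, first index first,
    # i.e. T[t] = T_{t_q} o ... o T_{t_1}.
    def T(x):
        for i in t:
            x = ops[i](x)
        return x
    return T

# toy example with two scalar operators on the real line
ops = {1: lambda x: x / 2.0, 2: lambda x: x + 1.0}
T12 = string_operator(ops, (1, 2))
value = T12(4.0)   # (4 / 2) + 1 = 3.0
```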
As mentioned above, [7, Corollary 4.50] and [7, Proposition 4.47] are cornerstones of our proofs of convergence. The following are slightly rephrased versions of them, respectively, adapted to our notation and needs.
Proposition 5.2
Let D be a nonempty subset of H, and let \((T_{i})_{i=1}^{m}\) be a finite family of self sQNEs on D such that \(\bigcap_{i=1}^{m}\operatorname{Fix}(T_{i})\neq \emptyset \), and set \(T=T_{1}T_{2}\cdots T_{m}\). Then T is sQNE and \(\operatorname{Fix}(T)=\bigcap_{i=1}^{m}\operatorname{Fix}(T_{i})\).
Proposition 5.3
Let D be a nonempty subset of H, let \((T_{i})_{i=1}^{m}\) be a finite family of self QNEs on D such that \(\bigcap_{i=1}^{m}\operatorname{Fix}(T_{i})\neq \emptyset \), and let \((w_{i})_{i=1}^{m}\) be a sequence of strictly positive real numbers such that \(\sum_{i=1}^{m}w_{i}=1\). Then \(\operatorname{Fix}(\sum_{i=1}^{m}w_{i}T_{i})=\bigcap_{i=1}^{m}\operatorname{Fix}(T_{i})\).
The following lemma combines all of the above into a useful auxiliary tool which is used repeatedly. We consider the operator \(T:=\sum_{t\in \varOmega }w(t)T[t]\), which is called a string-averaging operator.
Lemma 5.4
Let D be a nonempty closed convex subset of H, and let \((T_{i})_{i=1}^{m}\) be a finite family of self FNEs on D such that \(F:=\bigcap_{i=1}^{m}\operatorname{Fix}(T_{i})\neq \emptyset \). Let \((\varOmega ,w)\in \mathcal{M}\), and let \(T=\sum_{t\in \varOmega }w(t)T[t]\). Then \(\operatorname{Fix}(T)=\bigcap_{i=1}^{m}\operatorname{Fix}(T_{i})\).
Proof
Note that \(F\neq \emptyset \) and (10) imply that, for every \(t\in \varOmega \) and for every \(x\in F\), \(T[t](x)=x\) (since \(T[t]\) is a composition). Hence,
$$ F\subseteq \operatorname{Fix}\bigl(T[t]\bigr),\quad \forall t\in \varOmega . $$(11)
Therefore,
$$ \emptyset \neq F\subseteq \bigcap_{t\in \varOmega }\operatorname{Fix}\bigl(T[t]\bigr). $$(12)
We want to use Proposition 5.3 for the family \((T[t])_{t\in \varOmega }\). Since every FNE is NE (see Lemma 3.3(i)), for every \(t\in \varOmega \), we apply [13, Lemma 2.1.12(ii)] to the family \((T_{t_{\ell }})_{\ell =1}^{q}\) and conclude that the string operator \(T[t]=T_{t_{q}}T_{t_{q-1}}\cdots T_{t_{1}}\) is NE. Thus, from (11) and due to the fact that every NE with a fixed point is QNE, the family of operators \((T[t])_{t\in \varOmega }\) is a finite family of QNEs. This, together with (12), yields, according to Proposition 5.3, that
$$ \operatorname{Fix}(T)=\operatorname{Fix}\biggl(\sum_{t\in \varOmega }w(t)T[t]\biggr)=\bigcap_{t\in \varOmega }\operatorname{Fix}\bigl(T[t]\bigr). $$(13)
\(F\neq \emptyset \) implies that \(\operatorname{Fix}(T_{i})\neq \emptyset \) for every i; thus, by Lemma 3.3(iii), \((T_{i})_{i=1}^{m}\) is a family of sQNEs, and so, for every \(t\in \varOmega \), applying Proposition 5.2 with the family \((T_{t_{\ell }})_{\ell =1}^{q}\) yields that the string operator \(T[t]\) satisfies
$$ \operatorname{Fix}\bigl(T[t]\bigr)=\bigcap_{\ell =1}^{q}\operatorname{Fix}(T_{t_{\ell }}). $$(14)
From the fitness of Ω and from (14), we get
$$ \bigcap_{t\in \varOmega }\operatorname{Fix}\bigl(T[t]\bigr)=\bigcap_{i=1}^{m}\operatorname{Fix}(T_{i})=F $$(15)
and, overall, from (13),
$$ \operatorname{Fix}(T)=F. $$(16)
□
We are now ready to propose the new string-averaging methods for solving the best approximation problem and prove their convergence.
The static string-averaging method for the finite case
First we discuss string-averaging methods in which a single pair \((\varOmega ,w)\in \mathcal{M}\) is picked at the outset and kept fixed throughout the iterative process. Such string-averaging methods will be termed “static string-averaging methods”. We will make use of the convergence theorem in [29]. Halpern’s algorithm is a sequential algorithm which generates a sequence via the iterative process
$$ x^{k+1}=\lambda _{k}u+(1-\lambda _{k})S\bigl(x^{k}\bigr), $$(17)
where \(S:D\rightarrow D\) is NE with \(\operatorname{Fix}(S)\neq \emptyset \), the anchor point \(u\in D\) is given and fixed, the initialization \(x^{0}\in D\) is an arbitrary point, and the sequence \((\lambda _{k})_{k\in \mathbb{N}}\) is a steering sequence as in Definition 3.2. Its proof of convergence can also be found in [7, Theorem 30.1]. We present a slightly rephrased version of this theorem without a proof.
Theorem 5.5
Let D be a nonempty closed convex subset of H, and let S be a self NE on D such that \(\operatorname{Fix}(S)\neq \emptyset \). Let \((\lambda _{k})_{k\in \mathbb{N}}\) be a steering sequence, and let \(u,x^{0}\in D\). Then any sequence generated by (17) converges strongly to \(P_{\operatorname{Fix}(S)}(u)\).
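To see Theorem 5.5 at work numerically, here is our own Python/NumPy sketch (not from the paper): S is the projection onto the horizontal axis of \(\mathbb{R}^{2}\), an FNE and hence NE operator whose fixed point set is that axis, and the steering sequence is \(\lambda _{k}=1/(k+1)\).

```python
import numpy as np

def halpern(S, u, x0, n_iter=20_000):
    # Halpern's iteration: x^{k+1} = lam_k * u + (1 - lam_k) * S(x^k),
    # with the steering sequence lam_k = 1 / (k + 1).
    x = x0
    for k in range(n_iter):
        lam = 1.0 / (k + 1)
        x = lam * u + (1.0 - lam) * S(x)
    return x

S = lambda x: np.array([x[0], 0.0])       # projection onto {x : x[1] = 0}
u = np.array([1.0, 2.0])                  # anchor point
x = halpern(S, u, np.array([5.0, -3.0]))  # approaches P_{Fix(S)}(u) = (1, 0)
```

With this slowly vanishing steering sequence the convergence is only of order 1/k, which is visible in the slow decay of the second coordinate.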
Our new static string-averaging algorithm for a finite family of FNEs is as follows.
Algorithm 1
(The static string-averaging algorithm for solving the best approximation problem in the finite case)
Initialization: Choose a single pair \((\varOmega ,w)\in \mathcal{M}\) and arbitrary \(x^{0}\in D\).
Iterative step: Given the current iterate \(x^{k}\), calculate the next iterate \(x^{k+1}\) by
$$ x^{k+1}=\lambda _{k}u+(1-\lambda _{k})\sum_{t\in \varOmega }w(t)T[t]\bigl(x^{k}\bigr), $$(18)
where \((\lambda _{k})_{k\in \mathbb{N}}\) is a steering sequence, u is the given anchor point, and \(w(t)\) and \(T[t]\) are as in Definition 5.1.
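The following self-contained Python/NumPy sketch of Algorithm 1 uses sets, strings, and weights of our own choosing (purely illustrative): the two FNE operators are the projections onto the halfspaces \(\{x : x_{1}\geq 0\}\) and \(\{x : x_{2}\geq 0\}\) of \(\mathbb{R}^{2}\), so F is the nonnegative quadrant and the iterates should approach \(P_{F}(u)\).

```python
import numpy as np

# two FNE operators: projections onto {x : x[0] >= 0} and {x : x[1] >= 0}
T = {1: lambda x: np.array([max(x[0], 0.0), x[1]]),
     2: lambda x: np.array([x[0], max(x[1], 0.0)])}

def apply_string(t, x):
    for i in t:                  # apply along the string, first index first
        x = T[i](x)
    return x

# a single fixed pair (Omega, w), kept throughout: two strings, equal weights
Omega = [(1, 2), (2, 1)]
w = {(1, 2): 0.5, (2, 1): 0.5}

u = np.array([-1.0, 2.0])        # anchor; here P_F(u) = (0, 2)
x = np.array([7.0, -3.0])        # arbitrary initialization
for k in range(5_000):
    lam = 1.0 / (k + 1)                                  # steering sequence
    avg = sum(w[t] * apply_string(t, x) for t in Omega)  # string averaging
    x = lam * u + (1.0 - lam) * avg                      # iterative step
```

After 5000 iterations the iterate agrees with \(P_{F}(u)=(0,2)\) to roughly three decimal places.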
The convergence proof of Algorithm 1 follows.
Theorem 5.6
Let D be a nonempty closed convex subset of H, and let \((T_{i})_{i=1}^{m}\) be a finite family of self FNEs on D such that \(F:=\bigcap_{i=1}^{m}\operatorname{Fix}(T_{i})\neq \emptyset \). Let \((\varOmega ,w)\in \mathcal{M}\) be fixed, let \((\lambda _{k})_{k\in \mathbb{N}}\) be a steering sequence, and let \(u,x^{0}\in D\). Then any sequence \((x^{k})_{k\in \mathbb{N}}\), generated by Algorithm 1, converges strongly to \(P_{F}(u)\).
Proof
For \((T_{i})_{i=1}^{m}\) and \((\varOmega ,w)\in \mathcal{M}\), consider the family of operators \((T[t])_{t\in \varOmega }\) and define the string-averaging operator \(T:=\sum_{t\in \varOmega }w(t)T [t ]\). We show first that the operator T is NE and that \(\operatorname{Fix}(T)=\bigcap_{i=1}^{m}\operatorname{Fix}(T_{i})\). As in the proof of Lemma 5.4, each \(T[t]\) is NE, so T is a convex combination of NEs, and thus, since a convex combination of NEs is NE (see, e.g., [13, Lemma 2.1.12(i)]), T is NE. Moreover, \(\operatorname{Fix}(T)\) is not empty since it contains F. Applying Halpern’s Theorem 5.5 with T in the role of S, any sequence \((x^{k})_{k\in \mathbb{N}}\) generated by Algorithm 1 converges strongly to \(P_{\operatorname{Fix}(T)}(u)\). Now, applying Lemma 5.4 to \((T_{i})_{i=1}^{m}\) together with \((\varOmega ,w)\) results in
$$ \operatorname{Fix}(T)=\bigcap_{i=1}^{m}\operatorname{Fix}(T_{i})=F $$(19)
and, therefore,
$$ P_{\operatorname{Fix}(T)}(u)=P_{F}(u). $$(20)
□
This concludes our treatment of the static string-averaging algorithm for solving the best approximation problem in the finite case. We make the following remark about it.
Remark 5.7
Theorem 5.6 is related to two important results that appear in [7]. It generalizes Corollaries 30.2 and 30.3 of that book from the algorithmic structural point of view, because the algorithms there are fully simultaneous and fully sequential, respectively. These two algorithmic options are special cases of the static string-averaging algorithm, obtained either by choosing strings of length one with every index \(i=1,2,\ldots ,m\) appearing in exactly one string or by choosing to use a single string that includes all indices \(i=1,2,\ldots ,m\), respectively. However, our Theorem 5.6 cannot be considered a generalization of those corollaries because the corollaries deal with NEs while we restrict our analysis of the string-averaging algorithmic structure to FNEs only. The question whether or not our Theorem 5.6 can be proven for NEs remains open.
Next we expand our results to a non-static case.
The quasi-dynamic string-averaging method for the finite case
The key adjustment that we made in the construction of Algorithm 1 in order to prove its convergence with the aid of Halpern’s algorithm was the repeated use of the same single fixed pair \((\varOmega ,w)\) in all iterations \(k\geq 1\). This reuse of a fixed \((\varOmega ,w)\) throughout the whole iterative process of Algorithm 1 is a special case of a more general method, mentioned briefly already in Sect. 2, called the dynamic string-averaging method. The dynamic string-averaging algorithmic scheme allows one to pick and use, in every step of the iterative process, any pair \((\varOmega ,w)\) from a predefined set \(\mathcal{M}_{*}\subset \mathcal{M}\). The set \(\mathcal{M}_{*}\) was defined in [19, Equation (21)] as follows.
Definition 5.8
Fix a number \(\Delta \in (0, 1/m)\) and an integer \(\bar{q}\geq m\), and denote by \(\mathcal{M}_{*}\equiv \mathcal{M}_{*}(\Delta ,\bar{q})\) the set of all \((\varOmega ,w)\in \mathcal{M}\) such that the lengths of the strings are bounded and the weights are bounded away from zero, namely
$$ \gamma (t)\leq \bar{q}\quad \text{and}\quad w(t)\geq \Delta ,\quad \forall t\in \varOmega . $$(21)
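In code, membership in \(\mathcal{M}_{*}(\Delta ,\bar{q})\) is a direct check of the two bounds (the helper name is ours; strings are tuples of indices and w is a dictionary of weights):

```python
def in_M_star(Omega, w, delta, q_bar):
    # (Omega, w) belongs to M_*(delta, q_bar) iff every string length
    # gamma(t) is at most q_bar and every weight w(t) is at least delta;
    # the weights must also sum to 1, as for every pair in M.
    return (all(len(t) <= q_bar for t in Omega)
            and all(w[t] >= delta for t in Omega)
            and abs(sum(w.values()) - 1.0) < 1e-12)

Omega = [(1, 2), (2, 1)]
w = {(1, 2): 0.5, (2, 1): 0.5}
ok = in_M_star(Omega, w, delta=0.25, q_bar=2)    # both bounds hold
bad = in_M_star(Omega, w, delta=0.6, q_bar=2)    # weight bound fails
```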
The set \(\mathcal{M}_{*}\) is an infinite subset of \(\mathcal{M}\), but in our quasi-dynamic string-averaging method proposed below we must confine ourselves to a finite subset of \(\mathcal{M}\). So, instead of choosing the pairs \((\varOmega ,w)\) from \(\mathcal{M}_{*}\), we choose them from a finite-cardinality subset, denoted by \(\mathcal{M}'\subset \mathcal{M}\), and use them in a cyclic manner. The finiteness of \(\mathcal{M}'\) guarantees that it is actually a subset of \(\mathcal{M}_{*}\) for suitable Δ and \(\bar{q}\). Such an algorithm is indeed not as “dynamic” as Algorithm 6 of [19] or Algorithm 3 of [20]; nevertheless, it is not static like Algorithm 1. Therefore, we call it a quasi-dynamic string-averaging method.
Next, we explain how the construction of our quasi-dynamic string-averaging algorithm is done. Let us construct a sequence that is an ordered version of \(\mathcal{M}'\) as follows: Let \(\sigma :\mathcal{M}'\rightarrow \{ 1,2,\ldots , \vert \mathcal{M}' \vert \} \) be a one-to-one correspondence, and denote, for every \(r\in \{ 1,2,\ldots , \vert \mathcal{M}' \vert \} \), \(S_{r}:=(\varOmega _{r},w_{r})\). Let \((S_{r} )_{r=1}^{ \vert \mathcal{M}' \vert }\) be the sequence of all \((\varOmega ,w)\in \mathcal{M}'\), sorted by \(\sigma ((\varOmega ,w))\). Namely, \(S_{1}\) is the pair \((\varOmega ,w)\) such that \(\sigma ((\varOmega ,w))=1\), \(S_{2}\) is the pair \((\varOmega ,w)\) such that \(\sigma ((\varOmega ,w))=2\), and so forth, until \(S_{ \vert \mathcal{M}' \vert }\) is the pair \((\varOmega ,w)\) such that \(\sigma ((\varOmega ,w))= \vert \mathcal{M}' \vert \).
The following is our quasi-dynamic string-averaging algorithm.
Algorithm 2
(The quasi-dynamic string-averaging algorithm for solving the best approximation problem in the finite case)
Initialization: \(x^{0}\in D\) is arbitrary.
Iterative step: Given the current iterate \(x^{k}\), calculate the next iterate \(x^{k+1}\) by
$$ x^{k+1}=\lambda _{k}u+(1-\lambda _{k})\sum_{t\in \varOmega _{j(k)}}w_{j(k)}(t)T[t]\bigl(x^{k}\bigr), $$(22)
where \((\lambda _{k})_{k\in \mathbb{N}}\) is a steering sequence, u is the given anchor point, \(j(k)=k \bmod \vert \mathcal{M}' \vert +1\) for all \(k\geq 0\) is a cyclic control sequence (see, e.g., [17, Definition 4]), and \(\varOmega _{j(k)}\) and \(w_{j(k)}\) are the elements of the pair \(S_{j(k)}=(\varOmega _{j(k)},w_{j(k)})\), respectively.
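The cyclic use of several pairs can be sketched in Python/NumPy as follows (the operators, pairs, and weights are our own illustrative choices, with F again the nonnegative quadrant of \(\mathbb{R}^{2}\)): \(S_{1}\) uses the single string (1, 2), while \(S_{2}\) averages the two one-element strings.

```python
import numpy as np

T = {1: lambda x: np.array([max(x[0], 0.0), x[1]]),   # P onto {x : x[0] >= 0}
     2: lambda x: np.array([x[0], max(x[1], 0.0)])}   # P onto {x : x[1] >= 0}

def apply_string(t, x):
    for i in t:
        x = T[i](x)
    return x

# M' = {S_1, S_2}: two (Omega, w) pairs, used cyclically
pairs = [([(1, 2)], {(1, 2): 1.0}),               # S_1: one string of length 2
         ([(1,), (2,)], {(1,): 0.5, (2,): 0.5})]  # S_2: simultaneous averaging

u = np.array([-1.0, 2.0])     # anchor; here P_F(u) = (0, 2)
x = np.array([7.0, -3.0])
for k in range(5_000):
    lam = 1.0 / (k + 1)
    Omega, w = pairs[k % len(pairs)]              # cyclic control j(k)
    avg = sum(w[t] * apply_string(t, x) for t in Omega)
    x = lam * u + (1.0 - lam) * avg
```

Both pairs have the nonnegative quadrant as the fixed point set of their string-averaging operator, so the cyclic iteration again drives the iterates toward \(P_{F}(u)=(0,2)\).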
To prove the convergence of Algorithm 2, we make use of the following theorem, which is a slightly rephrased version of Theorem 3.1 in [4].
Theorem 5.9
Let D be a nonempty closed convex subset of H, and let \((T_{i})_{i=1}^{m}\) be a finite family of self NEs on D such that \(F:=\bigcap_{i=1}^{m}\operatorname{Fix}(T_{i})\neq \emptyset \). Assume that
$$ F=\operatorname{Fix}(T_{m}T_{m-1}\cdots T_{1})=\operatorname{Fix}(T_{1}T_{m}\cdots T_{2})=\cdots =\operatorname{Fix}(T_{m-1}T_{m-2}\cdots T_{1}T_{m}). $$(23)
Let \(i(k)=k \bmod m+1\) be a cyclic control sequence, let \((\lambda _{k})_{k\in \mathbb{N}}\) be a steering sequence, and let \(u,x^{0}\in D\). Then any sequence \((x^{k})_{k\in \mathbb{N}}\) generated by
$$ x^{k+1}=\lambda _{k}u+(1-\lambda _{k})T_{i(k)}\bigl(x^{k}\bigr) $$(24)
converges strongly to \(P_{F}(u)\).
With the aid of the sequence \((S_{r})_{r=1}^{ \vert \mathcal{M}' \vert }\) and Theorem 5.9, we present in the next theorem a proof of convergence of Algorithm 2.
Theorem 5.10
Let D be a nonempty closed convex subset of H, and let \((T_{i})_{i=1}^{m}\) be a finite family of self FNEs on D such that \(F:=\bigcap_{i=1}^{m}\operatorname{Fix}(T_{i})\neq \emptyset \). Let \(\mathcal{M}'\) be a finite subset of \(\mathcal{M}\), let \(j(k)=k \bmod \vert \mathcal{M}' \vert +1\) for all \(k\geq 0\) be a cyclic control sequence, and let \((\lambda _{k})_{k\in \mathbb{N}}\) be a steering sequence. Let \(u,x^{0}\in D\). Then any sequence \((x^{k})_{k\in \mathbb{N}}\), generated by Algorithm 2, converges strongly to \(P_{F}(u)\).
Proof
Define the finite family of operators \((T_{S_{r}})_{r=1}^{ \vert \mathcal{M}' \vert }\) by \(T_{S_{r}}:=\sum_{t\in \varOmega _{r}}w_{r}(t)T[t]\), where for every \(r\in \{ 1,2,\ldots , \vert \mathcal{M}' \vert \} \), \(S_{r}:=(\varOmega _{r},w_{r})\). We first show that for every \(r\in \{ 1,2,\ldots , \vert \mathcal{M}' \vert \} \) the operator \(T_{S_{r}}\) is both NE and sQNE. By arguments similar to those made for T in the proof of Theorem 5.6, it follows that \((T_{S_{r}})_{r=1}^{ \vert \mathcal{M}' \vert }\) is a family of NEs. From the proof of Lemma 5.4 we deduce that \((T[t])_{t\in \varOmega _{r}}\) is a family of QNEs such that
$$ \emptyset \neq F\subseteq \bigcap_{t\in \varOmega _{r}}\operatorname{Fix}\bigl(T[t]\bigr) $$(25)
for every \(r\in \{ 1,2,\ldots , \vert \mathcal{M}' \vert \} \). Therefore, we are able to apply Proposition 5.3 to the family \((T[t])_{t\in \varOmega _{r}}\) (in place of the family \((T_{i})_{i=1}^{m}\)) for every \(r\in \{ 1,2,\ldots , \vert \mathcal{M}' \vert \} \), which results in
$$ \operatorname{Fix}(T_{S_{r}})=\bigcap_{t\in \varOmega _{r}}\operatorname{Fix}\bigl(T[t]\bigr). $$(26)
Therefore, by (25),
$$ \emptyset \neq F\subseteq \operatorname{Fix}(T_{S_{r}})\quad \text{for every } r\in \bigl\{ 1,2,\ldots , \vert \mathcal{M}' \vert \bigr\} . $$
Next we show that \(T_{S_{r}}\) is sQNE. According to Lemma 3.3(iii), each \(T_{i}\) is sQNE, and hence, similarly to the analysis made in the proof of Lemma 5.4, with the aid of [7, Corollary 4.50] instead of [13, Lemma 2.1.12(ii)], \(T[t]\) is sQNE for every \(t\in \varOmega _{r}\) and for every \(r\in \{ 1,2,\ldots , \vert \mathcal{M}' \vert \} \). In particular, the latter shows that, for given r and any \(t\in \varOmega _{r}\), we have, due to (26), that for all \(x\in D\setminus \operatorname{Fix}(T[t])\) and \(y\in \operatorname{Fix}(T_{S_{r}})\),
$$ \bigl\Vert T[t](x)-y \bigr\Vert < \Vert x-y \Vert . $$(27)
Thus, \(T[t]\) is C-sQNE with \(C:=\operatorname{Fix}(T_{S_{r}})\). Consequently, from (26), (27), and [13, Theorem 2.1.26(i)], \(T_{S_{r}}\) is C-sQNE for every \(r\in \{ 1,2,\ldots , \vert \mathcal{M}' \vert \} \), and so, by the definition of C and [13, p. 47], \(T_{S_{r}}\) is sQNE for every \(r\in \{ 1,2,\ldots , \vert \mathcal{M}' \vert \} \).
Therefore, since \(\emptyset \neq F\subseteq \bigcap_{r=1}^{ \vert \mathcal{M}' \vert }\operatorname{Fix}(T_{S_{r}})\), we can apply Proposition 5.2 to the family \((T_{S_{r}})_{r=1}^{ \vert \mathcal{M}' \vert }\), which yields that (23) holds for \((T_{S_{r}})_{r=1}^{ \vert \mathcal{M}' \vert }\). Hence, since the family \((T_{S_{r}})_{r=1}^{ \vert \mathcal{M}' \vert }\) is a family of NEs and it satisfies (23), we let the role of the family \((T_{i})_{i=1}^{m}\) in Theorem 5.9 be played by the family \((T_{S_{r}})_{r=1}^{ \vert \mathcal{M}' \vert }\). Moreover, by taking the sequence \((i(k))_{k\in \mathbb{N}}\) in Theorem 5.9 to be \((j(k))_{k\in \mathbb{N}}\), (22) turns out to be a special case of (24), and we deduce that any sequence \((x^{k})_{k\in \mathbb{N}}\), generated by (22), converges strongly to \(P_{\bigcap_{r=1}^{ \vert \mathcal{M}' \vert }\operatorname{Fix}(T_{S_{r}})}(u)\).
Now, (26), (14), and the fitness of Ω (recall Definition 5.1) imply that
and, in conclusion,
□
This concludes our treatment of the quasi-dynamic string-averaging algorithm for solving the best approximation problem in the finite case.
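To make the quasi-dynamic scheme concrete, the following Python sketch runs it in \(\mathbb{R}^{2}\) with two halfspace projections as the FNEs. Everything specific here (the sets, the two singleton string collections, the unit weights, and the steering sequence \(\lambda _{k}=1/(k+1)\)) is an illustrative assumption, not part of the algorithm's statement.

```python
import numpy as np

def proj_halfspace(a, b):
    """Orthogonal projection onto {x : <a, x> <= b}, a firmly nonexpansive map."""
    a = np.asarray(a, dtype=float)
    def P(x):
        return x - (max(a @ x - b, 0.0) / (a @ a)) * a
    return P

# Two halfspaces in R^2 whose intersection F is the box (-inf, 1] x (-inf, 1].
T_ops = [proj_halfspace([1.0, 0.0], 1.0), proj_halfspace([0.0, 1.0], 1.0)]

def string_op(t):
    """String operator T[t] = T_{t_q} ... T_{t_1}: apply the operators along t."""
    def T(x):
        for i in t:
            x = T_ops[i](x)
        return x
    return T

# A finite set M' of string collections; here each Omega_r holds one string.
collections = [[((0, 1), 1.0)],   # S_1: the single string (1, 2), weight 1
               [((1, 0), 1.0)]]   # S_2: the single string (2, 1), weight 1

def T_S(r, x):
    return sum(w * string_op(t)(x) for t, w in collections[r])

u = np.array([3.0, 2.0])          # anchor point; here P_F(u) = (1, 1)
x = np.array([4.0, 5.0])          # x^0
for k in range(2000):
    lam = 1.0 / (k + 1)           # a steering sequence
    r = k % len(collections)      # cyclic control j(k) (0-based here)
    x = lam * u + (1 - lam) * T_S(r, x)
# x is now close to P_F(u) = (1, 1)
```

With these choices \(P_{F}(u)=(1,1)\), and the iterates approach it at the slow rate typical of Halpern-type steering.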
Simultaneous string-averaging methods
One possibility to define a simultaneous string-averaging method was discussed in Remark 5.7 above and termed “fully simultaneous”. By using a family of string-averaging operators \((T_{S_{r}})_{r=1}^{ \vert \mathcal{M}' \vert }\), as in Sect. 5.2, and employing an additional weight sequence \((\hat{w_{r}})_{r=1}^{ \vert \mathcal{M}' \vert }\) of strictly positive real numbers such that \(\sum_{r=1}^{ \vert \mathcal{M}' \vert }\hat{w_{r}}=1\), we can construct yet another algorithm of a simultaneous nature for solving the best approximation problem to common fixed point sets of operators in the finite case. This algorithm convexly combines via \((\hat{w_{r}})_{r=1}^{ \vert \mathcal{M}' \vert }\) the points \(T_{S_{r}}(x^{k})\), which amounts to string-averaging the endpoints of the string operators \(T[t]\) for every \(t\in \varOmega _{r}\) and for every \(r\in \{ 1,2,\ldots , \vert \mathcal{M}' \vert \} \). The scheme is as follows.
Algorithm 3
(The simultaneous string-averaging algorithm for solving the best approximation problem in the finite case)
Initialization: \(x^{0}\in D\).
Iterative step: Given the current iterate \(x^{k}\), calculate the next iterate \(x^{k+1}\) by
where \((\lambda _{k})_{k\in \mathbb{N}}\) is a steering sequence, u is the given anchor point, and \((\hat{w_{r}})_{r=1}^{ \vert \mathcal{M}' \vert }\) are user-chosen strictly positive real numbers such that \(\sum_{r=1}^{ \vert \mathcal{M}' \vert }\hat{w_{r}}=1\).
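A minimal numerical sketch of Algorithm 3 may help fix ideas. The halfspaces, the two strings of length two, and the weights \(\hat{w}=(1/2,1/2)\) below are illustrative assumptions only.

```python
import numpy as np

# Projections onto the halfspaces {x : x_1 <= 1} and {x : x_2 <= 1}.
def P1(x): return np.array([min(x[0], 1.0), x[1]])
def P2(x): return np.array([x[0], min(x[1], 1.0)])

# Two string operators T_{S_1} = P2 o P1 and T_{S_2} = P1 o P2, whose
# endpoints are convexly combined with user-chosen weights (1/2, 1/2).
def T_S1(x): return P2(P1(x))
def T_S2(x): return P1(P2(x))
w_hat = (0.5, 0.5)

u = np.array([3.0, 2.0])          # anchor point; P_F(u) = (1, 1)
x = np.array([0.0, 4.0])          # x^0
for k in range(2000):
    lam = 1.0 / (k + 1)           # steering sequence
    x = lam * u + (1 - lam) * (w_hat[0] * T_S1(x) + w_hat[1] * T_S2(x))
# x is now close to P_F(u) = (1, 1)
```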
It is possible to obtain the convergence of Algorithm 3 from Theorem 5.6 about our static string-averaging Algorithm 1. But this would limit the scope to a family \((T_{S_{r}})_{r=1}^{ \vert \mathcal{M}' \vert }\) of FNEs only. Therefore, we derive the convergence of Algorithm 3 from Corollary 30.2 in [7], which holds for NEs. We do this next. Let D be a nonempty closed convex subset of H, and let \((T_{i})_{i=1}^{m}\) be a finite family of self NEs on D.
Algorithm 4
(The fully simultaneous algorithm for the best approximation problem in Corollary 30.2 of [7])
Initialization: \(x^{0}\in D\).
Iterative step: Given the current iterate \(x^{k}\), calculate the next iterate \(x^{k+1}\) by
where \((\lambda _{k})_{k\in \mathbb{N}}\) is a steering sequence, u is the given anchor point, and \((w_{i})_{i=1}^{m}\) is a sequence of user-chosen strictly positive numbers such that \(\sum_{i=1}^{m}w_{i}=1\).
A slightly rephrased version of Corollary 30.2 of [7] is as follows.
Theorem 5.11
Let D be a nonempty closed convex subset of H, and let \((T_{i})_{i=1}^{m}\) be a finite family of self NEs on D such that \(F:=\bigcap_{i=1}^{m}\operatorname{Fix}(T_{i})\neq \emptyset \). Let \((w_{i})_{i=1}^{m}\) be a sequence of strictly positive real numbers such that \(\sum_{i=1}^{m}w_{i}=1\), and let \((\lambda _{k})_{k\in \mathbb{N}}\) be a steering sequence. Let \(u,x^{0}\in D\). Then any sequence \((x^{k})_{k\in \mathbb{N}}\), generated by Algorithm 4, converges strongly to \(P_{F}(u)\).
Algorithm 3 is now a special case of Algorithm 4 and its convergence follows from the above theorem.
Theorem 5.12
Let D be a nonempty closed convex subset of H, and let \((T_{i})_{i=1}^{m}\) be a finite family of self FNEs on D such that \(F:=\bigcap_{i=1}^{m}\operatorname{Fix}(T_{i})\neq \emptyset \). Let \((T_{S_{r}})_{r=1}^{ \vert \mathcal{M}' \vert }\) be as in Theorem 5.10, let \((\hat{w_{r}})_{r=1}^{ \vert \mathcal{M}' \vert }\) be a sequence of strictly positive real numbers such that \(\sum_{r=1}^{ \vert \mathcal{M}' \vert }\hat{w_{r}}=1\), and let \((\lambda _{k})_{k\in \mathbb{N}}\) be a steering sequence. Let \(u,x^{0}\in D\). Then any sequence \((x^{k})_{k\in \mathbb{N}}\), generated by Algorithm 3, converges strongly to \(P_{F}(u)\).
Proof
In the proof of Theorem 5.10 we showed that the family \((T_{S_{r}})_{r=1}^{ \vert \mathcal{M}' \vert }\) is a family of NEs with a nonempty common fixed points set, and so, Theorem 5.11 implies that any sequence \((x^{k})_{k\in \mathbb{N}}\), generated by Algorithm 3, converges strongly to \(P_{\bigcap_{r=1}^{ \vert \mathcal{M}' \vert }\operatorname{Fix}(T_{S_{r}})}(u)\). Now, from (29), we conclude that
□
String-averaging with orthogonal projections for the best approximation to a finite family of closed convex sets
In this section we specialize our results on string-averaging methods for solving the best approximation problem to orthogonal projections since they are known to be FNEs (see, e.g., [6, Facts 1.5(i)]). We start by looking at the static string-averaging method with orthogonal projections, which is a special case of our Algorithm 1. We show that the simultaneous version of the Halpern–Lions–Wittman–Bauschke (HLWB) algorithm, used in [14, Algorithm 5], and the sequential Halpern–Wittman algorithm (see, e.g., Bauschke and Koch [8, Algorithm 4.1]) are special cases of the string-averaging methods.
The static string-averaging method for a finite family of closed convex sets
We write down formally the static string-averaging method for a finite family of orthogonal projections in order to make sure that our string operators in such a case remain well defined. Let \(D\subseteq H\) be a subset of H, and let \((C_{i})_{i=1}^{m}\) be a finite family of closed convex sets \(C_{i}\subseteq D\) for every \(i\in \{ 1,2,\ldots , m \} \). We denote by \(P_{C_{i}}:D\rightarrow C_{i}\) the orthogonal projection onto the closed convex set \(C_{i}\). We now present the static string-averaging method for a finite family of closed convex sets.
Algorithm 5
(The static string-averaging algorithm for the best approximation to a finite family of closed convex sets)
Initialization: Choose a single pair \((\varOmega ,w)\in \mathcal{M}\) and arbitrary \(x^{0}\in D\).
Iterative step: Given the current iterate \(x^{k}\), calculate the next iterate \(x^{k+1}\) by
where \((\lambda _{k})_{k\in \mathbb{N}}\) is a steering sequence, u is the given anchor point, and \(w(t)\) and \(P[t]\) are as in Definition 5.1 with \((T_{i})_{i=1}^{m}=(P_{C_{i}})_{i=1}^{m}\) and \(T[t]=P[t]\).
The proof of convergence of Algorithm 5 follows.
Theorem 6.1
Let D be a nonempty closed convex subset of H, and let \((C_{i})_{i=1}^{m}\) be a finite family of closed convex sets \(C_{i}\subseteq D\) for every \(i\in \{ 1,2,\ldots , m \} \) such that \(C:=\bigcap_{i=1}^{m}C_{i}\neq \emptyset \). Let \((\varOmega ,w)\in \mathcal{M}\), let \((\lambda _{k})_{k\in \mathbb{N}}\) be a steering sequence, and let \(u,x^{0}\in D\). Then any sequence \((x^{k})_{k\in \mathbb{N}}\), generated by Algorithm 5, converges strongly to \(P_{C}(u)\).
Proof
This is a straightforward consequence of Theorem 5.6. Set the finite family of FNEs in Theorem 5.6 to be \((P_{C_{i}})_{i=1}^{m}\). Then (34) turns out to be a special case of (17), and so, by Theorem 5.6, any sequence \((x^{k})_{k\in \mathbb{N}}\), generated by Algorithm 5, converges strongly to \(P_{C}(u)\). □
The simultaneous Halpern–Lions–Wittman–Bauschke algorithm as a special case
In [21] a parallel version of Halpern’s algorithm leads to a simultaneous Halpern–Lions–Wittman–Bauschke (HLWB) algorithm for a countable family of FNEs, see also Deutsch and Yamada [26]. Here we present a simultaneous HLWB algorithm for the case of a finite family of closed convex sets. The convergence of this algorithm follows directly by choosing strings that are singletons such that each index \(i\in \{ 1,2,\ldots , m \} \) appears in one string. Thus, this algorithm is not only a consequence of the above-mentioned work of Combettes but also a consequence of our work here. Again, let D be a nonempty closed convex subset of H, let \((C_{i})_{i=1}^{m}\) be a finite family of closed convex sets \(C_{i}\subseteq D\) for every \(i\in \{ 1,2,\ldots , m \} \) such that \(C:=\bigcap_{i=1}^{m}C_{i}\neq \emptyset \).
Algorithm 6
(The simultaneous HLWB algorithm with orthogonal projections for the best approximation to a finite family of closed convex sets)
Initialization: \(x^{0}=u\).
Iterative step: Given the current iterate \(x^{k}\), calculate the next iterate \(x^{k+1}\) by
where \((\lambda _{k})_{k\in \mathbb{N}}\) is a steering sequence, u is the given anchor point, and \((w_{i})_{i=1}^{m}\) is a sequence of user-chosen strictly positive real numbers such that \(\sum_{i=1}^{m}w_{i}=1\).
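The simultaneous HLWB iteration can be sketched numerically as follows; the two halfspaces, the equal weights, and the steering choice \(\lambda _{k}=1/(k+1)\) are illustrative assumptions only.

```python
import numpy as np

# Projections onto C_1 = {x : x_1 <= 1} and C_2 = {x : x_2 <= 1}.
def P1(x): return np.array([min(x[0], 1.0), x[1]])
def P2(x): return np.array([x[0], min(x[1], 1.0)])
w = (0.5, 0.5)                    # user-chosen positive weights summing to 1

u = np.array([3.0, 2.0])          # anchor point; here P_C(u) = (1, 1)
x = u.copy()                      # initialization x^0 = u
for k in range(5000):
    lam = 1.0 / (k + 1)           # steering sequence
    x = lam * u + (1 - lam) * (w[0] * P1(x) + w[1] * P2(x))
# x is now close to P_C(u) = (1, 1)
```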
The Halpern–Wittman algorithm as a special case
Following the results in [29], Wittman [35] showed the convergence of an algorithm that is presented in [8, Algorithm 4.1] and is named there the Halpern–Wittman algorithm. It is designed for a finite family of orthogonal projections and a specific steering sequence. This algorithm is presented below as Algorithm 7, and its convergence follows directly from the convergence of Algorithm 5 by putting all indices \(i\in \{ 1,2,\ldots ,m \} \) into a single string. Again, let D be a nonempty closed convex subset of H, and let \((C_{i})_{i=1}^{m}\) be a finite family of closed convex sets \(C_{i}\subseteq D\) for every \(i\in \{ 1,2,\ldots , m \} \) such that \(C:=\bigcap_{i=1}^{m}C_{i}\neq \emptyset \).
Algorithm 7
(The Halpern–Wittman algorithm)
Initialization: \(x^{0}=u\).
Iterative step: Given the current iterate \(x^{k}\), calculate the next iterate \(x^{k+1}\) by
where u is the given anchor point.
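The scheme can be sketched for two lines through the origin, so that \(C=\{ 0 \} \) and \(P_{C}(u)=0\). The steering choice \(\lambda _{k}=1/(k+1)\) used below is one admissible choice, taken here for illustration only.

```python
import numpy as np

# Two lines through the origin: C_1 is the x-axis, C_2 is the line x_1 = x_2,
# so C = C_1 ∩ C_2 = {0} and P_C(u) = 0 for every u.
def P1(x):
    return np.array([x[0], 0.0])
def P2(x):
    s = (x[0] + x[1]) / 2.0
    return np.array([s, s])

u = np.array([2.0, 1.0])          # anchor point
x = u.copy()                      # initialization x^0 = u
for k in range(5000):
    lam = 1.0 / (k + 1)           # an admissible steering choice
    x = lam * u + (1 - lam) * P2(P1(x))   # all indices in a single string
# x is now close to P_C(u) = (0, 0)
```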
String-averaging methods for best approximation to the common fixed points set of a family of firmly nonexpansive operators: the infinite case
In this section we propose a string-averaging method for solving the best approximation problem to the common fixed point set of a countable family of FNEs \((T_{i})_{i\in I}\), where I is a countable set of positive integers. We use similar terms to the ones that were used in Sect. 5 for the finite case, extending Definition 5.1 to the countable case.
The following definition elaborates how this expansion is made.
Definition 7.1
Let D be a nonempty closed convex subset of H, and let \((T_{i})_{i\in I}\) be a countable family of self operators on D. An index vector is a vector of the form \(t=(t_{1},t_{2},\ldots ,t_{p})\) such that \(t_{\ell }\in I\) for all \(\ell \in \{ 1,2,\ldots ,p \} \). For a given index vector \(t=(t_{1},t_{2},\ldots ,t_{q})\), we denote its length (i.e., the number of its components) by \(\gamma (t)=q\) and define the operator \(T[t]\) as the (finite) composition of the operators \(T_{i}\) whose indices appear in the index vector t, namely
and call it a string operator. An infinite set Ω of index vectors is called fit if, for each \(i\in I\), there exists a vector \(t=(t_{1},t_{2},\ldots ,t_{p})\in \varOmega \) such that \(t_{\ell }=i\) for some \(\ell \in \{ 1,2,\ldots ,p \} \). Denote by \(\mathcal{M}\) the collection of all pairs \((\varOmega ,w)\), where Ω is a fit countable set of index vectors and \(w:\varOmega \rightarrow (0,1)\) is such that \(\sum_{t\in \Omega }w(t)=1\).
Observe that in Definition 5.1, \(w:\Omega \rightarrow (0,1]\) is permitted, whereas here we must have \(w:\varOmega \rightarrow (0,1)\). This is due to the fact that in the infinite case here it is impossible to put all operators \((T_{i})_{i\in I}\) in a single string operator \(T[t]\).
The static string-averaging method for the infinite case
Algorithm 1 handles a finite family of FNEs \((T_{i})_{i=1}^{m}\), and throughout its iterative process a finite number of string operators \(T [t ]\) is used to construct a single string-averaging operator \(T=\sum_{t\in \Omega }w(t)T [t ]\). In the infinite case we allow an infinite family \((T_{i})_{i\in I}\). In our extension to the infinite case Ω is countable (not finite as it was before), and we allow a countable number of string operators \(T [t ]\) and a chosen fixed (infinite dimensional) weight vector, making now the single string-averaging operator \(T:=\sum_{t\in \Omega }w(t)T [t ]\) an infinite series. The fact that only a single T is used makes this a “static” string-averaging method. Algorithm 8 below is our suggested method for this case.
Algorithm 8
(The static string-averaging algorithm for solving the best approximation problem in the countable case)
Initialization: Choose a single pair \((\varOmega ,w)\in \mathcal{M}\) and arbitrary \(x^{0}\in D\).
Iterative step: Given the current iterate \(x^{k}\), calculate the next iterate \(x^{k+1}\) by
where \((\lambda _{k})_{k\in \mathbb{N}}\) is a steering sequence, u is the given anchor point, and \(w(t)\) and \(T[t]\) are as in Definition 7.1.
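To see Algorithm 8 in action numerically, the infinite series must be truncated. The sketch below uses a countable family containing only two distinct projections (so that \(P_{F}(u)\) is computable by hand), singleton strings, and geometric weights \(w((i))=2^{-i}\); the truncation at M terms and the renormalization of the weights are numerical conveniences, not part of the algorithm.

```python
import numpy as np

# Projections onto C_1 = {x : x_1 <= 1} and C_2 = {x : x_2 <= 1}.
def P1(x): return np.array([min(x[0], 1.0), x[1]])
def P2(x): return np.array([x[0], min(x[1], 1.0)])

# Countable family T_i: P1 for odd i, P2 for even i, arranged in singleton
# strings with geometric weights w((i)) = 2^{-i}.  For the computation the
# series is truncated at M terms and the weights renormalized.
M = 50
w = np.array([2.0 ** -(i + 1) for i in range(M)])
w /= w.sum()
ops = [P1 if i % 2 == 0 else P2 for i in range(M)]

def T(x):                          # the string-averaging operator
    return sum(wi * op(x) for wi, op in zip(w, ops))

u = np.array([3.0, 2.0])           # anchor point; here P_F(u) = (1, 1)
x = np.array([5.0, 5.0])           # x^0
for k in range(5000):
    lam = 1.0 / (k + 1)            # steering sequence
    x = lam * u + (1 - lam) * T(x)
# x is now close to P_F(u) = (1, 1)
```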
Our aim is to use Theorem 5.5 again to prove the convergence of Algorithm 8 to the projection of the anchor point onto the common fixed point set of the initially given family of FNEs. Recall that, when we did so earlier for the finite case, we used Lemma 5.4, which in turn depended on Propositions 5.2 and 5.3. As we are now dealing with the countable case, we first extend these propositions to the countable case. Note that since the length of any index vector remains finite in our countable case, Proposition 5.2 remains applicable. Hence, we only slightly adjust Proposition 5.3 as follows and present an elementary proof of it.
Proposition 7.2
Let D be a nonempty subset of H, let \((T_{i})_{i\in I}\) be a countable family of self QNEs on D such that \(\bigcap_{i\in I}\operatorname{Fix}(T_{i})\neq \emptyset \), and let \((w_{i})_{i\in I}\) be a countable sequence of strictly positive real numbers such that \(\sum_{i\in I}w_{i}=1\). Then \(\operatorname{Fix}(\sum_{i\in I}w_{i}T_{i})=\bigcap_{i\in I}\operatorname{Fix}(T_{i})\).
Proof
First, let us show that \(\sum_{i\in I}w_{i}T_{i}\) is a well-defined operator. That is, for every \(x\in D\), we have to show that \(\sum_{i\in I}w_{i}T_{i}(x)\) converges as an infinite series. Fix \(x\in D\), let \(f\in \bigcap_{i\in I}\operatorname{Fix}(T_{i})\), and let \(T_{i}\in (T_{i})_{i\in I}\). First, we observe that, by the quasi-nonexpansivity of \(T_{i}\),
which shows that the series
converges. Define the sequence of partial sums of \(\sum_{i\in I}w_{i}T_{i}(x)\), i.e., for every \(n\in \mathbb{N}\), \(S_{n}:=\sum_{i=1}^{n}w_{i}T_{i}(x)\).
In order to apply the Cauchy criterion for series convergence, we use (39) to obtain
Since the series in (40) is convergent, it follows that
which shows that \((S_{n})_{n\in \mathbb{N}}\) is a Cauchy sequence, thus, from the completeness of H, \((S_{n})_{n\in \mathbb{N}}\) converges and, hence, \(\sum_{i\in I}w_{i}T_{i}(x)\) converges as well and \(\sum_{i\in I}w_{i}T_{i}(x)\) is well defined, as required.
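In outline, the estimate behind this well-definedness step rests only on the quasi-nonexpansivity of each \(T_{i}\) and is presumably of the following form (a sketch, without the paper's displayed numbering):

```latex
% Quasi-nonexpansivity of each T_i with respect to f gives a uniform bound:
\|T_i(x)\| \le \|T_i(x)-f\| + \|f\| \le \|x-f\| + \|f\| \quad (i \in I),
% so the series of norms is dominated by a convergent series:
\sum_{i\in I} w_i \,\|T_i(x)\|
  \le \bigl(\|x-f\| + \|f\|\bigr)\sum_{i\in I} w_i
  = \|x-f\| + \|f\| < \infty ,
% and the partial sums satisfy the Cauchy criterion:
\Bigl\| \sum_{i=n+1}^{n+p} w_i\,T_i(x) \Bigr\|
  \le \bigl(\|x-f\| + \|f\|\bigr)\sum_{i=n+1}^{n+p} w_i
  \;\xrightarrow[n\to\infty]{}\; 0 .
```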
The rest of the proof is similar to the proof of Corollary 4.48 in [7], which is quoted above in our Proposition 5.3. Set \(Q:=\sum_{i\in I}w_{i}T_{i}\). It is clear that \(\bigcap_{i\in I}\operatorname{Fix}(T_{i})\subseteq \operatorname{Fix}(Q)\). To handle the opposite inclusion, we first do the following calculation. Let \(y\in \bigcap_{i\in I}\operatorname{Fix}(T_{i})\). Then, for every \(i\in I\) and any \(x\in D\),
which, together with the quasi-nonexpansivity of \(T_{i}\), yields
Now let us take a point \(z\in \operatorname{Fix}(Q)\) and observe that it can be rewritten as \(z=\sum_{i\in I}w_{i}z\). Then from (44) it follows that
but only if the right-hand side series \(\sum_{i\in I}w_{i}\Vert T_{i}(z)-z\Vert ^{2}\) is convergent. To prove this last claim, we use (39) and the Cauchy–Schwarz inequality to reach
for any \(f\in \bigcap_{i\in I}\operatorname{Fix}(T_{i})\). Hence, by (46), and once more by (39), we have
The first infinite sum on the right-hand side of (47) is a convergent series due to (39), the middle series on the right-hand side of (47) is convergent because of (46), and the last series converges trivially; thus the series \(\sum_{i\in I}w_{i}\Vert T_{i}(z)-z\Vert ^{2}\) is convergent, and (45) guarantees that \(\sum_{i\in I}w_{i}\Vert T_{i}(z)-z\Vert ^{2}=0\). Since \(w_{i}\neq 0\) for all \(i\in I\), we obtain that \(z\in \bigcap_{i\in I}\operatorname{Fix}(T_{i})\). □
We are now ready to extend Lemma 5.4 to the countable case.
Lemma 7.3
Let D be a nonempty closed convex subset of H, and let \((T_{i})_{i\in I}\) be a countable family of self FNEs on D such that \(F:=\bigcap_{i\in I}\operatorname{Fix}(T_{i})\neq \emptyset \). Let the single pair \((\varOmega ,w)\in \mathcal{M}\), and let \(T:=\sum_{t\in \Omega }w(t)T[t]\) be the string-averaging operator. Then \(\operatorname{Fix}(T)=\bigcap_{i\in I}\operatorname{Fix}(T_{i})\).
Proof
The proof proceeds in several steps. First, the family \((T[t])_{t\in \varOmega }\) is countable because Ω is, and since \(T[t]=T_{t_{q}}T_{t_{q-1}}\cdots T_{t_{1}}\) is a finite composition, we use similar arguments to those in the proof of Lemma 5.4 to show that \((T[t])_{t\in \varOmega }\) is a family of QNEs with nonempty common fixed points set. Note that \(F\neq \emptyset \) and (37) imply that, for every \(t\in \varOmega \) and for every \(x\in F\), \(T[t](x)=x\). Hence,
Therefore,
Since every FNE is NE (see Lemma 3.3(i)), we apply, for every \(t\in \varOmega \), [13, Lemma 2.1.12(ii)] to the family \((T_{t_{\ell }})_{\ell =1}^{q}\) and conclude that the string operator \(T[t]:=T_{t_{q}}T_{t_{q-1}}\cdots T_{t_{1}}\) is NE. Thus, from (48), which guarantees the nonemptiness of \(\operatorname{Fix}(T[t])\), and due to the fact that every NE with a fixed point is QNE, the family of operators \((T[t])_{t\in \varOmega }\) is a family of QNEs.
Secondly, by similar arguments to those that appear in the first part of the proof of Proposition 7.2, T is a welldefined operator and we can apply Proposition 7.2 with the family \((T[t])_{t\in \varOmega }\) instead of the family \((T_{i})_{i\in I}\) and with the family \((w(t))_{t\in \varOmega }\) instead of the family \((w_{i})_{i\in I}\) used there. Hence, we get
Now we apply Proposition 5.2 to the finite family \((T_{t_{\ell }})_{\ell =1}^{q}\). To do so, we note that, by Lemma 3.3(iii), all members of \((T_{i})_{i\in I}\) are sQNEs, and so, for every \(t\in \varOmega \), Proposition 5.2 yields that the string operator \(T[t]\) satisfies
Finally, from the fitness of Ω, from (50) and from (51), we obtain that
□
It is well known that a convex combination of NEs is NE (see, e.g., [13, Lemma 2.1.12]). Since the string operators \(T[t]\) in Algorithm 8 appear inside an infinite series that resembles a convex combination, our next aim is to show that, under certain assumptions, the string-averaging operator \(T=\sum_{t\in \Omega }w(t)T[t]\) is NE. The following lemma will show this.
Lemma 7.4
Let D be a nonempty subset of H, and let \((S_{i})_{i\in I}\) be a countable family of NEs such that, for every \(i\in I\), \(S_{i}:D\rightarrow H\) and \(\bigcap_{i\in I}\operatorname{Fix}(S_{i})\neq \emptyset \). Then \(S:=\sum_{i\in I}w_{i}S_{i}\), where \((w_{i})_{i\in I}\) is a countable sequence of strictly positive real numbers such that \(\sum_{i\in I}w_{i}=1\), is NE.
Proof
By using arguments similar to the ones that were used in the proof of Proposition 7.2, we deduce that S is well defined. Now, let \(x,y\in D\). By the nonexpansivity of every \(S_{i}\) and since \(\sum_{i\in I}w_{i}=1\), it follows that
Thus, by Definition 3.1(i), S is NE. □
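Lemma 7.4 can also be checked empirically. The following Python sketch builds a truncated countable family of NEs, namely projections onto random halfspaces through the origin (so that 0 is a common fixed point), forms their weighted combination S, and samples the nonexpansivity inequality. The dimension, weights, and number of samples are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# A truncated countable family of NEs: projections onto random halfspaces
# {x : <a, x> <= 0} through the origin, so 0 is a common fixed point.
M = 20
def make_proj(a):
    def P(x):
        return x - (max(a @ x, 0.0) / (a @ a)) * a
    return P
ops = [make_proj(a) for a in rng.normal(size=(M, 3))]
w = np.array([2.0 ** -(i + 1) for i in range(M)])
w /= w.sum()                       # strictly positive weights summing to 1

def S(x):                          # the weighted combination of the family
    return sum(wi * op(x) for wi, op in zip(w, ops))

# Empirical check of nonexpansivity: ||S(x) - S(y)|| <= ||x - y||.
ok = all(
    np.linalg.norm(S(x) - S(y)) <= np.linalg.norm(x - y) + 1e-12
    for x, y in (rng.normal(size=(2, 3)) for _ in range(100))
)
# ok should be True (the inequality is exact up to roundoff)
```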
We are now ready to prove convergence of Algorithm 8.
Theorem 7.5
Let D be a nonempty closed convex subset of H, and let \((T_{i})_{i\in I}\) be a countable family of self FNEs on D such that \(F:=\bigcap_{i\in I}\operatorname{Fix}(T_{i})\neq \emptyset \). Let \((\Omega ,w)\in \mathcal{M}\), let \((\lambda _{k})_{k\in \mathbb{N}}\) be a steering sequence, and let \(u,x^{0}\in D\). Then any sequence \((x^{k})_{k\in \mathbb{N}}\), generated by Algorithm 8, converges strongly to \(P_{F}(u)\).
Proof
For \((T_{i})_{i\in I}\) and \((\Omega ,w)\in \mathcal{M}\), consider the family of operators \((T[t])_{t\in \varOmega }\) and define the string-averaging operator \(T:=\sum_{t\in \varOmega }w(t)T [t ]\). By the proof of Lemma 5.4, we deduce that \((T[t])_{t\in \varOmega }\) is a countable family of NEs, and thus, T is a “convex combination” of countably many NEs. Since \(F\subseteq \bigcap_{t\in \varOmega }\operatorname{Fix}(T[t])\), it follows that \(\bigcap_{t\in \varOmega }\operatorname{Fix}(T[t])\neq \emptyset \), and therefore it can be shown, by using arguments similar to the ones that were used in Proposition 7.2, that T is a well-defined operator. Moreover, Lemma 7.4 yields that T is NE, and since \(\operatorname{Fix}(T)\supseteq F\neq \emptyset \), Theorem 5.5 is applicable to T and implies that every sequence \((x^{k})_{k\in \mathbb{N}}\) generated by Algorithm 8 converges strongly to \(P_{\operatorname{Fix}(T)}(u)\). Now, by applying Lemma 7.3 to \((T_{i})_{i\in I}\) together with \((\varOmega ,w)\), we obtain that \(\operatorname{Fix}(T)=F\), thus \(x^{k}\rightarrow P_{F}(u)\). □
In [21] the following simultaneous algorithm for solving the best approximation problem to the common fixed point set of a countable family of FNEs with a nonempty common fixed points set is studied. This algorithm turns out to be a special case of our Algorithm 8 when the pair \((\varOmega ,w)\) is chosen as \(\Omega := \{ (1),(2),(3),\ldots \} \) and, for every \(i\in I\), \(w((i)):=w_{i}\). The convergence of this algorithm then follows from our Theorem 7.5.
Algorithm 9
(Combettes simultaneous algorithm)
Initialization: Choose \(x^{0}\in D\).
Iterative step: Given the current iterate \(x^{k}\), calculate the next iterate \(x^{k+1}\) by
where \((\lambda _{k})_{k\in \mathbb{N}}\) is a steering sequence, u is the given anchor point, and \((w_{i})_{i\in I}\) is a sequence of user-chosen strictly positive real numbers such that \(\sum_{i\in I}w_{i}=1\).
Concluding comments
In this work we expand the class of string-averaging methods by proposing new string-averaging approaches for solving the BAP for the common fixed points set of either a finite or an infinite family of FNEs. These methods vary from the “static” format, in which one uses a single pair \((\varOmega ,w)\in \mathcal{M}\), to quasi-dynamic and simultaneous formats, where more than a single pair is used. Nevertheless, these methods are obtained with string operators which are defined as finite compositions of operators taken from the given family of operators; thus the question of infinitely many compositions, i.e., strings of infinite length, remains open, as does the question whether our methods can be extended to the fully dynamic case.
Another question that remains open is whether the results of [1], where a sequential algorithm for solving the BAP to the common fixed points set of a semigroup of nonexpansive operators in Hilbert space was studied, can be extended to encompass stringaveraging algorithmic schemes.
Availability of data and materials
Not applicable.
References
 1.
Aleyner, A., Censor, Y.: Best approximation to common fixed points of a semigroup of nonexpansive operators. J. Nonlinear Convex Anal. 6, 137–151 (2005)
 2.
Artacho, F.J.A., Campoy, R.: A new projection method for finding the closest point in the intersection of convex sets. Comput. Optim. Appl. 69, 99–132 (2018)
 3.
Bargetz, C., Reich, S., Zalas, R.: Convergence properties of dynamic stringaveraging projection methods in the presence of perturbations. Numer. Algorithms 77, 185–209 (2018)
 4.
Bauschke, H.H.: The approximation of fixed points of compositions of nonexpansive mappings in Hilbert space. J. Math. Anal. Appl. 202, 150–159 (1996)
 5.
Bauschke, H.H., Borwein, J.M.: On the convergence of von Neumann’s alternating projection algorithm for two sets. SetValued Anal. 1, 185–212 (1993)
 6.
Bauschke, H.H., Borwein, J.M.: On projection algorithms for solving convex feasibility problems. SIAM Rev. 38, 367–426 (1996)
 7.
Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Space, 2nd edn. Springer, Cham (2017)
 8.
Bauschke, H.H., Koch, V.R.: Projection methods: Swiss army knives for solving feasibility and best approximation problems with halfspaces. Contemp. Math. 636, 1–40 (2015)
 9.
Bregman, L.M., Censor, Y., Reich, S., ZepkowitzMalachi, Y.: Finding the projection of a point onto the intersection of convex sets via projections onto halfspaces. J. Approx. Theory 124, 194–218 (2003)
 10.
Blat, D., Hero, A.O. III: Energy based sensor network source localization via projection onto convex sets (POCS). IEEE Trans. Signal Process. 54, 3614–3619 (2006)
 11.
Boyle, J.P., Dykstra, R.L.: A method for finding projections onto the intersection of convex sets in Hilbert spaces. In: Dykstra, R., Robertson, T., Wright, F.T. (eds.) Advances in Order Restricted Statistical Inference. Lecture Notes in Statistics, vol. 37, pp. 28–47. Springer, New York (1986)
 12.
Brooke, M., Censor, Y., Gibali, A.: Dynamic stringaveraging CQmethods for the split feasibility problem with percentage violation constraints arising in radiation therapy treatment planning. Int. Trans. Oper. Res. (2020). https://doi.org/10.1111/itor.12929
 13.
Cegielski, A.: Iterative Methods for Fixed Point Problems in Hilbert Spaces. Springer, Berlin (2012)
 14.
Censor, Y.: Computational acceleration of projection algorithms for the linear best approximation problem. Linear Algebra Appl. 416, 111–123 (2006)
 15.
Censor, Y., Cegielski, A.: Projection methods: an annotated bibliography of books and reviews. Optimization 64, 2343–2358 (2015)
 16.
Censor, Y., Elfving, T., Herman, G.T.: Averaging strings of sequential iterations for convex feasibility problems. In: Butnariu, D., Censor, Y., Reich, S. (eds.) Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications, pp. 101–114. Elsevier, Amsterdam (2001)
 17.
Censor, Y., Segal, A.: On the string averaging method for sparse common fixedpoint problems. Int. Trans. Oper. Res. 16, 481–494 (2009)
 18.
Censor, Y., Tom, E.: Convergence of stringaveraging projection schemes for inconsistent convex feasibility problems. Optim. Methods Softw. 18, 543–554 (2003)
 19.
Censor, Y., Zaslavski, A.J.: Convergence and perturbation resilience of dynamic stringaveraging projection methods. Comput. Optim. Appl. 54, 65–76 (2013)
 20.
Censor, Y., Zaslavski, A.J.: Stringaveraging projected subgradient methods for constrained minimization. Optim. Methods Softw. 29, 658–670 (2014)
 21.
Combettes, P.L.: Construction d’un point fixe commun à une famille de contractions fermes. C. R. Acad. Sci. Paris, Sér. A Math. 320, 1385–1390 (1995)
 22.
Crombez, G.: Finding common fixed points of strict paracontractions by averaging strings of sequential iterations. J. Nonlinear Convex Anal. 3, 345–351 (2002)
 23.
Deutsch, F.: Rate of convergence of the method of alternating projections. In: Brosowski, B., Deutsch, F. (eds.) Parametric Optimization and Approximation. International Series of Numerical Mathematics, vol. 72, pp. 96–107. Birkhäuser, Basel (1984)
 24.
Deutsch, F.: Best Approximation in Inner Product Spaces. Springer, New York (2001)
 25.
Deutsch, F., Hundal, H.: The rate of convergence for the method of alternating projections II. J. Math. Anal. Appl. 205, 381–405 (1997)
 26.
Deutsch, F., Yamada, I.: Minimizing certain convex functions over the intersection of the fixed point sets of nonexpansive mappings. Numer. Funct. Anal. Optim. 19, 33–56 (1998)
 27.
Dye, J., Khamsi, M.A., Reich, S.: Random products of contractions in Banach spaces. Trans. Am. Math. Soc. 325, 87–99 (1991)
 28.
Dykstra, R.L.: An algorithm for restricted least squares regression. J. Am. Stat. Assoc. 78, 837–842 (1983)
 29.
Halpern, B.: Fixed points of nonexpanding maps. Bull. Am. Math. Soc. 73, 957–961 (1967)
 30.
Kong, T.Y., Pajoohesh, H., Herman, G.T.: Stringaveraging algorithms for convex feasibility with infinitely many sets. Inverse Probl. 35, 045011 (2019)
 31.
Kopecká, E., Reich, S.: A note on the von Neumann alternating projections algorithm. J. Nonlinear Convex Anal. 5, 379–386 (2004)
 32.
López, G., MartinMárquez, V., Xu, H.: Halpern’s iteration for nonexpansive mappings. Contemp. Math. 513, 211–231 (2010)
 33.
Reich, S., Zalas, R.: A modular string averaging procedure for solving the common fixed point problem for quasinonexpansive mappings in Hilbert space. Numer. Algorithms 72, 297–323 (2016)
 34.
von Neumann, J.: Functional Operators II: The Geometry of Orthogonal Spaces. Princeton University Press, Princeton (1950). Reprint of mimeographed lecture notes first distributed in 1933
 35.
Wittman, R.: Approximation of fixed points of nonexpansive mappings. Arch. Math. 58, 486–491 (1992)
Acknowledgements
We are grateful to Shoham Sabach and Alexander Zaslavski for their very useful comments on an earlier version of this work. We thank the referee and the editor for their comments.
Funding
This work was supported by the ISFNSFC joint research program grant No. 2874/19.
Author information
Contributions
All authors contributed equally and significantly in this research work. All authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Censor, Y., Nisenbaum, A. Stringaveraging methods for best approximation to common fixed point sets of operators: the finite and infinite cases. Fixed Point Theory Algorithms Sci Eng 2021, 9 (2021). https://doi.org/10.1186/s13663021006944
Keywords
 Stringaveraging
 Common fixed points
 Best approximation problem
 Firmly nonexpansive operators
 HLWB algorithm
 Halpern theorem
 Projection methods