
CGLS: deprecate tolerance #1892

Merged: 11 commits, Aug 22, 2024
4 changes: 4 additions & 0 deletions CHANGELOG.md
@@ -3,6 +3,8 @@
- Added SVRG and LSVRG stochastic functions (#1625)
- Added SAG and SAGA stochastic functions (#1624)
- Allow `SumFunction` with 1 item (#1857)
- Added callback `optimisation.utilities.callbacks.EarlyStoppingObjectiveValue`, which stops iterations if the change in the algorithm's objective value is less than a provided threshold (#1892)
- Added callback `optimisation.utilities.callbacks.CGLSEarlyStopping`, which replicates the automatic stopping behaviour of CGLS in CIL versions <=24 (#1892)
- Enhancements:
- Use ravel instead of flat in KullbackLeibler numba backend (#1874)
- Upgrade Python wrapper (#1873, #1875)
@@ -12,6 +14,8 @@
- Bug fixes:
- `ImageData` removes dimensions of size 1 from the input array. This fixes an issue where single slice reconstructions from 3D data would fail due to shape mismatches (#1885)
- Make Binner accept accelerated=False (#1887)
- Changes that break backwards compatibility:
- CGLS will no longer automatically stop iterations once a default tolerance is reached. The `tolerance` argument is deprecated, replaced by the callback functionality in `optimisation.utilities.callbacks` (#1892)


* 24.1.0
81 changes: 47 additions & 34 deletions Wrappers/Python/cil/optimisation/algorithms/CGLS.py
@@ -19,59 +19,74 @@
from cil.optimisation.algorithms import Algorithm
import numpy
import logging
import warnings

log = logging.getLogger(__name__)


class CGLS(Algorithm):

r'''Conjugate Gradient Least Squares algorithm
r'''Conjugate Gradient Least Squares (CGLS) algorithm

The Conjugate Gradient Least Squares (CGLS) algorithm is commonly used for solving large systems of linear equations, due to its fast convergence.

Problem:

.. math::

\min || A x - b ||^2_2

|

Parameters :

:parameter operator : Linear operator for the inverse problem
:parameter initial : Initial guess ( Default initial = 0)
:parameter data : Acquired data to reconstruct
:parameter tolerance: Tolerance/ Stopping Criterion to end CGLS algorithm

Reference:
https://web.stanford.edu/group/SOL/software/cgls/
\min_x || A x - b ||^2_2


Parameters
------------
operator : Operator
Linear operator for the inverse problem
initial : DataContainer, optional
Initial guess; if not provided, defaults to a DataContainer in the domain of the operator filled with zeros.
data : DataContainer in the range of the operator
Acquired data to reconstruct

Note
-----
Passing `tolerance` directly to CGLS is deprecated. Instead we recommend using the callback functionality: https://tomographicimaging.github.io/CIL/nightly/optimisation/#callbacks; in particular, the CGLSEarlyStopping callback replicates the old behaviour.

Reference
---------
https://web.stanford.edu/group/SOL/software/cgls/
'''
def __init__(self, initial=None, operator=None, data=None, tolerance=1e-6, **kwargs):
def __init__(self, initial=None, operator=None, data=None, **kwargs):
'''initialisation of the algorithm

:param operator : Linear operator for the inverse problem
:param initial : Initial guess ( Default initial = 0)
:param data : Acquired data to reconstruct
:param tolerance: Tolerance/ Stopping Criterion to end CGLS algorithm
'''
#We are deprecating tolerance
self.tolerance=kwargs.pop("tolerance", None)
if self.tolerance is not None:
warnings.warn(stacklevel=2, category=DeprecationWarning, message="Passing `tolerance` directly to CGLS is deprecated. Instead we recommend using the callback functionality: https://tomographicimaging.github.io/CIL/nightly/optimisation/#callbacks; in particular, the CGLSEarlyStopping callback replicates the old behaviour.")
else:
self.tolerance = 0

super(CGLS, self).__init__(**kwargs)

if initial is None and operator is not None:
initial = operator.domain_geometry().allocate(0)
if initial is not None and operator is not None and data is not None:
self.set_up(initial=initial, operator=operator, data=data, tolerance=tolerance)

def set_up(self, initial, operator, data, tolerance=1e-6):
'''initialisation of the algorithm
self.set_up(initial=initial, operator=operator, data=data)

def set_up(self, initial, operator, data):
r'''Initialisation of the algorithm

Parameters
------------
operator : Operator
Linear operator for the inverse problem
initial : DataContainer, optional
Initial guess; if not provided, defaults to a DataContainer in the domain of the operator filled with zeros.
data : DataContainer in the range of the operator
Acquired data to reconstruct

:param operator: Linear operator for the inverse problem
:param initial: Initial guess ( Default initial = 0)
:param data: Acquired data to reconstruct
:param tolerance: Tolerance/ Stopping Criterion to end CGLS algorithm
'''

log.info("%s setting up", self.__class__.__name__)
self.x = initial.copy()
self.operator = operator
self.tolerance = tolerance

self.r = data - self.operator.direct(self.x)
self.s = self.operator.adjoint(self.r)
@@ -110,7 +125,7 @@ def update(self):
#self.p = self.s + self.beta * self.p
self.p.sapyb(self.beta, self.s, 1, out=self.p)

self.normx = self.x.norm()
self.normx = self.x.norm()  # TODO: Deprecated, remove when CGLS tolerance is removed


def update_objective(self):
@@ -119,17 +134,15 @@ def update_objective(self):
raise StopIteration()
self.loss.append(a)

def should_stop(self):
def should_stop(self): # TODO: Deprecated, remove when CGLS tolerance is removed
return self.flag() or super().should_stop()

def flag(self):
def flag(self): # TODO: Deprecated, remove when CGLS tolerance is removed
'''returns whether the tolerance has been reached'''
flag = (self.norms <= self.norms0 * self.tolerance) or (self.normx * self.tolerance >= 1)

if flag:
self.update_objective()
if self.iteration > self._iteration[-1]:
print (self.verbose_output())
print('Tolerance is reached: {}'.format(self.tolerance))

return flag
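The deprecated `tolerance` flow above is replaced by the callback mechanism. The sketch below shows just that mechanism with hypothetical stand-ins (`ToyAlgorithm` and `StopBelow` are not CIL classes): each callback is invoked once per iteration and ends the run early by raising `StopIteration`, which is exactly how the new callbacks in this PR signal termination.

```python
# Toy sketch of the callback mechanism that replaces `tolerance`.
# `ToyAlgorithm` and `StopBelow` are hypothetical stand-ins, not CIL classes.
class StopBelow:
    '''Hypothetical callback: stop once a tracked quantity drops below a threshold.'''
    def __init__(self, threshold):
        self.threshold = threshold

    def __call__(self, algorithm):
        if algorithm.residual_norm < self.threshold:
            raise StopIteration

class ToyAlgorithm:
    '''Stand-in for an Algorithm whose residual halves every iteration.'''
    def __init__(self):
        self.residual_norm = 1.0
        self.iteration = 0

    def run(self, iterations, callbacks=()):
        for _ in range(iterations):
            self.residual_norm /= 2      # the "update" step
            self.iteration += 1
            try:
                for cb in callbacks:
                    cb(self)             # a callback stops the run by raising
            except StopIteration:
                break

algo = ToyAlgorithm()
algo.run(100, callbacks=[StopBelow(1e-3)])
# The loop terminates after 10 iterations (2**-10 < 1e-3), not 100.
```

Separating the stopping rule from the algorithm this way is the design choice motivating the deprecation: CGLS itself no longer decides when it is "done".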
53 changes: 53 additions & 0 deletions Wrappers/Python/cil/optimisation/utilities/callbacks.py
@@ -3,6 +3,7 @@

from tqdm.auto import tqdm as tqdm_auto
from tqdm.std import tqdm as tqdm_std
import numpy as np


class Callback(ABC):
@@ -134,3 +135,55 @@ class LogfileCallback(TextProgressCallback):
def __init__(self, log_file, mode='a', **kwargs):
self.fd = open(log_file, mode=mode)
super().__init__(file=self.fd, **kwargs)

class EarlyStoppingObjectiveValue(Callback):
'''Callback that stops iterations if the change in the objective value is less than a provided threshold value.

Parameters
----------
threshold: float, default 1e-6
Minimum change between two consecutively calculated objective values required to continue iterating.

Note
-----
This callback compares only the last two calculated objective values. If `update_objective_interval` is greater than 1, the objective value is calculated only every `update_objective_interval` iterations, rather than at each iteration (the default).

'''
def __init__(self, threshold=1e-6):
self.threshold=threshold
Comment on lines +139 to +152 (Member):

VERY strongly disapprove of this. Far superior naming, vis. https://keras.io/api/callbacks/early_stopping/:

  • EarlyStopping, EarlyStoppingObjectiveDelta, EarlyStoppingObjectiveChange
  • delta, change

def __call__(self, algorithm):
if len(algorithm.loss)>=2:
if np.abs(algorithm.loss[-1]-algorithm.loss[-2])<self.threshold:
raise StopIteration
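A self-contained demonstration of the logic above: the callback class mirrors this PR, while the algorithm is mocked by a bare object (`MockAlgorithm`, hypothetical) exposing only the `loss` list, which is all the callback inspects.

```python
# Demonstration of EarlyStoppingObjectiveValue with a mocked algorithm.
# `MockAlgorithm` is a hypothetical stand-in, not a CIL class.
import numpy as np

class EarlyStoppingObjectiveValue:
    def __init__(self, threshold=1e-6):
        self.threshold = threshold

    def __call__(self, algorithm):
        if len(algorithm.loss) >= 2:
            if np.abs(algorithm.loss[-1] - algorithm.loss[-2]) < self.threshold:
                raise StopIteration

class MockAlgorithm:
    def __init__(self, loss):
        self.loss = loss  # history of objective values

cb = EarlyStoppingObjectiveValue(threshold=1e-3)
cb(MockAlgorithm([10.0, 5.0]))            # change of 5.0: keeps running
try:
    cb(MockAlgorithm([5.0, 5.0 + 1e-4]))  # change of 1e-4 < 1e-3: stops
    stopped = False
except StopIteration:
    stopped = True
print(stopped)  # True
```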

class CGLSEarlyStopping(Callback):
r'''Callback to work with CGLS. It causes the algorithm to terminate if :math:`||A^T(Ax-b)||_2 < \epsilon||A^T(Ax_0-b)||_2`, where :math:`x` is the current iterate, :math:`x_0` is the initial value and :math:`\epsilon` defaults to 1e-6.
It will also terminate if the algorithm begins to diverge, i.e. if :math:`||x||_2 > \omega`, where :math:`\omega` defaults to 1e6.

Parameters
----------
epsilon: float, default 1e-6
Usually a small number: the algorithm terminates if :math:`||A^T(Ax-b)||_2 < \epsilon||A^T(Ax_0-b)||_2`
omega: float, default 1e6
Usually a large number: the algorithm terminates if :math:`||x||_2 > \omega`

Note
-----
This callback is implemented to replicate the automatic behaviour of CGLS in CIL versions <=24. It also replicates the behaviour of https://web.stanford.edu/group/SOL/software/cgls/.
'''
def __init__(self, epsilon=1e-6, omega=1e6):
self.epsilon=epsilon
self.omega=omega


def __call__(self, algorithm):

if (algorithm.norms <= algorithm.norms0 * self.epsilon):
print('The norm of the residual is less than {} times the norm of the initial residual and so the algorithm is terminated'.format(self.epsilon))
raise StopIteration
self.normx = algorithm.x.norm()
if self.normx >= self.omega:
print('The norm of the solution is greater than {} and so the algorithm is terminated'.format(self.omega))
raise StopIteration
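The two stopping tests above can be exercised in isolation. In this sketch the prints are dropped for brevity and the CGLS state (`norms`, `norms0`, `x`) is mocked; `MockX` and `MockCGLS` are hypothetical stand-ins, with `MockX` exposing only the `.norm()` method the callback needs.

```python
# Demonstration of CGLSEarlyStopping with mocked CGLS state.
# `MockX` and `MockCGLS` are hypothetical stand-ins, not CIL classes.
import numpy as np

class CGLSEarlyStopping:
    def __init__(self, epsilon=1e-6, omega=1e6):
        self.epsilon = epsilon
        self.omega = omega

    def __call__(self, algorithm):
        if algorithm.norms <= algorithm.norms0 * self.epsilon:
            raise StopIteration          # converged: residual norm small enough
        if algorithm.x.norm() >= self.omega:
            raise StopIteration          # diverging: solution norm too large

class MockX:
    '''Stand-in for a DataContainer; only .norm() is needed.'''
    def __init__(self, arr):
        self.arr = np.asarray(arr, dtype=float)

    def norm(self):
        return float(np.linalg.norm(self.arr))

class MockCGLS:
    def __init__(self, norms, norms0, x):
        self.norms, self.norms0, self.x = norms, norms0, x

cb = CGLSEarlyStopping()
cb(MockCGLS(norms=0.5, norms0=1.0, x=MockX([1.0, 2.0])))   # neither test fires
try:
    cb(MockCGLS(norms=1e-9, norms0=1.0, x=MockX([1.0])))   # epsilon test fires
    converged = False
except StopIteration:
    converged = True
print(converged)  # True
```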

