Data Structures

class PruningParams
class Pruner< FT >
    Pruner class, to compute and optimize cost and success probability of pruned enumeration. More...
Macros

#define PRUNER_MAX_N 2047
Functions

template<class FT >
int run_pruner_f (ZZ_mat< mpz_t > &B, int sel_ft, int precision=0, int prune_start=0, int prune_end=1, double prune_pre_nodes=1e6, double prune_min_prob=-1, double gh_factor=1.0)
    Run pruner on B. More...

int run_pruner (ZZ_mat< mpz_t > &B, FloatType float_type=FT_DEFAULT, int precision=0, int prune_start=0, int prune_end=1, double prune_pre_nodes=1e6, double prune_min_prob=-1, double gh_factor=1.0)
    Run pruner on B. More...

template<class FT >
void prune (PruningParams &pruning, const double enumeration_radius, const double preproc_cost, const vector< double > &gso_r, const double target=.9, const PrunerMetric metric=PRUNER_METRIC_PROBABILITY_OF_SHORTEST, const int flags=PRUNER_GRADIENT)
    Search for optimal pruning parameters. More...

template<class FT >
void prune (PruningParams &pruning, double enumeration_radius, const double preproc_cost, const vector< vector< double > > &gso_rs, const double target=.9, const PrunerMetric metric=PRUNER_METRIC_PROBABILITY_OF_SHORTEST, const int flags=PRUNER_GRADIENT)
    Search for optimal pruning parameters, averaging over several bases. More...

template<class FT >
FT svp_probability (const PruningParams &pruning)
    Probability that a pruned enumeration indeed returns the shortest vector. More...

template<class FT >
FT svp_probability (const vector< double > &pr)
    Probability that a pruned enumeration indeed returns the shortest vector. More...
Macro Definition Documentation
◆ PRUNER_MAX_N
#define PRUNER_MAX_N 2047
Function Documentation
◆ prune() [1/2]
template<class FT >
void prune (PruningParams &pruning,
            const double enumeration_radius,
            const double preproc_cost,
            const vector< double > &gso_r,
            const double target = .9,
            const PrunerMetric metric = PRUNER_METRIC_PROBABILITY_OF_SHORTEST,
            const int flags = PRUNER_GRADIENT)
Search for optimal pruning parameters.
Wrapping function for the Pruner class. The function is templated by a floating-point type FT, which is used for internal computations.
Calling M the metric function (success probability or expected number of solutions), C the cost function of enumeration, and P the preprocessing cost preproc_cost, this function tries to provide a pruning vector p essentially minimizing (C(p) + P) / M(p). The pruning vector is expressed as relative (squared) lengths with respect to the enumeration radius, i.e. the entries of p lie in the interval (0,1) and are always decreasing.
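To spell out why this ratio is the natural objective (a short restatement, not verbatim from the source): each attempt costs C(p) + P, and under the probability metric a fresh attempt succeeds with probability M(p), so about 1/M(p) attempts are needed on average and the expected total cost is

\[ \mathbb{E}[\text{total cost}] \approx \frac{C(p) + P}{M(p)}. \]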
Small amendments are made to also account for the desired target value of the metric after repeating several enumerations. In particular, the chosen pruning parameters should not exceed the target, and a bit more enumeration may be chosen if it gets to the desired result without any retrial.
The cost C of enumeration is computed under the Gaussian heuristic, and depends on gso_r. The metric function M is defined by the metric parameter (and also depends on gso_r for PRUNER_METRIC_EXPECTED_SOLUTIONS).
This function provides access to three algorithms to optimize these parameters:
- greedy (deactivated by setting flag PRUNER_START_FROM_INPUT)
- gradient descent (activated by setting flag PRUNER_GRADIENT)
- Nelder-Mead Algorithm (activated by setting flag PRUNER_NELDER_MEAD)
If several algorithms are activated, they are all run, each using the output of the previous one as its starting point, as sketched below.
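The flags are bit flags, so activating several algorithms is a bitwise OR (a minimal sketch using the flag names listed above):

```cpp
// Chain gradient descent and Nelder-Mead; the greedy phase also runs
// first, since PRUNER_START_FROM_INPUT is not set.
int flags = PRUNER_GRADIENT | PRUNER_NELDER_MEAD;
```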
Greedy should be very fast, and essentially tries to produce pruning parameters generating a flat enumeration tree. It is meant to quickly provide pruning parameters, say up to block size 40. It is also used as a decent starting point to make the gradient descent quicker. It can be argued that this strategy is a factor O(n) away from optimal, probably less in practice.
The gradient descent is rather standard. The gradient is computed numerically, and then several steps are made in the same direction until a local minimum in that direction is found (the rationale is that computing the gradient is 2n times more expensive than testing a point). It is to be expected that this method suffers from convergence and numerical-stability issues, especially due to the numerical differentiation. It remains rather fast and seems to properly handle LLL-reduced bases up to dimension ~70, and more if the basis is more strongly reduced.
The Nelder-Mead algorithm is a descent algorithm praised for its robustness. It's pretty cool, you should really read about it: https://en.wikipedia.org/wiki/Nelder%E2%80%93Mead_method . In brief, it maintains a simplex of points and each step consists in improving the worst of them by reflection, expansion or contraction, or in shrinking the whole simplex if all of the above failed. This method is somewhat aware not only of the gradient, but also of the curvature of the optimized function. Thanks to contraction and expansion, it can slowly "pass through the eye of a needle" and re-accelerate the descent after that.
Maybe you don't care so much about all this... I'm just procrastinating on the low-level documentation.
- Parameters
    - pruning: output of the function. Also used as an input if PRUNER_START_FROM_INPUT is on.
    - enumeration_radius: radius of the desired enumeration.
    - preproc_cost: cost of preprocessing (i.e. the additive cost of retrying an enumeration).
    - gso_r: vector of Gram-Schmidt lengths (squared) of the enumerated block.
    - target: desired target success probability / expected number of solutions after all retrials.
    - metric: the metric to be optimized: PRUNER_METRIC_PROBABILITY_OF_SHORTEST or PRUNER_METRIC_EXPECTED_SOLUTIONS.
    - flags: complementary parameters: PRUNER_CVP, PRUNER_START_FROM_INPUT, PRUNER_GRADIENT, PRUNER_NELDER_MEAD, PRUNER_VERBOSE.
- Returns
- The result is returned by reference through the pruning parameter.
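A minimal usage sketch, assuming fplll's public header and that PruningParams exposes its optimized bounds through its coefficients field (as in fplll's defs.h); the Gram-Schmidt profile below is entirely made up:

```cpp
#include <fplll/fplll.h>
#include <cmath>
#include <iostream>
#include <vector>

using namespace fplll;

int main()
{
  // Made-up, geometrically decreasing squared Gram-Schmidt profile
  // for a block of size 40.
  std::vector<double> gso_r(40);
  for (int i = 0; i < 40; i++)
    gso_r[i] = std::pow(1.06, -i);

  PruningParams pruning;
  const double enumeration_radius = gso_r[0]; // enumerate within the first GS length
  const double preproc_cost       = 1e6;      // assumed preprocessing cost, in nodes

  // Greedy start, then gradient descent, then Nelder-Mead.
  prune<FP_NR<double>>(pruning, enumeration_radius, preproc_cost, gso_r,
                       /*target=*/0.9, PRUNER_METRIC_PROBABILITY_OF_SHORTEST,
                       PRUNER_GRADIENT | PRUNER_NELDER_MEAD);

  // The optimized relative (squared) bounds, decreasing from 1.
  for (double c : pruning.coefficients)
    std::cout << c << " ";
  std::cout << std::endl;
  return 0;
}
```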
◆ prune() [2/2]
template<class FT >
void prune (PruningParams &pruning,
            double enumeration_radius,
            const double preproc_cost,
            const vector< vector< double > > &gso_rs,
            const double target = .9,
            const PrunerMetric metric = PRUNER_METRIC_PROBABILITY_OF_SHORTEST,
            const int flags = PRUNER_GRADIENT)
Search for optimal pruning parameters, averaging over several bases.
Wrapping function for the Pruner class.
Same as prune(), but averaging the cost over several bases (without slowing down the search).
- Parameters
    - pruning: output of the function. Also used as an input if PRUNER_START_FROM_INPUT is on.
    - enumeration_radius: radius of the desired enumeration.
    - preproc_cost: cost of preprocessing (i.e. the additive cost of retrying an enumeration).
    - gso_rs: vector of vectors of Gram-Schmidt lengths (squared) of the enumerated block, over several bases.
    - target: desired target success probability / expected number of solutions after all retrials.
    - metric: the metric to be optimized: PRUNER_METRIC_PROBABILITY_OF_SHORTEST or PRUNER_METRIC_EXPECTED_SOLUTIONS.
    - flags: complementary parameters: PRUNER_CVP, PRUNER_START_FROM_INPUT, PRUNER_GRADIENT, PRUNER_NELDER_MEAD, PRUNER_VERBOSE.
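A sketch of the multi-basis variant, reusing the includes from the sketch above (the profiles are again made up):

```cpp
// Three hypothetical squared Gram-Schmidt profiles for the same block size.
std::vector<std::vector<double>> gso_rs(3, std::vector<double>(40));
for (size_t j = 0; j < gso_rs.size(); j++)
  for (int i = 0; i < 40; i++)
    gso_rs[j][i] = std::pow(1.05 + 0.01 * j, -i);

PruningParams pruning;
// Optimize once, with the cost averaged over the three profiles.
prune<FP_NR<double>>(pruning, /*enumeration_radius=*/gso_rs[0][0],
                     /*preproc_cost=*/1e6, gso_rs);
```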
◆ run_pruner()
int run_pruner (ZZ_mat< mpz_t > &B,
                FloatType float_type = FT_DEFAULT,
                int precision = 0,
                int prune_start = 0,
                int prune_end = 1,
                double prune_pre_nodes = 1e6,
                double prune_min_prob = -1,
                double gh_factor = 1.0)
Run pruner on B.
- Parameters
    - B: basis of the lattice to be reduced.
    - float_type: specifies the data type used for GSO computations (see defs.h for options).
    - precision: specifies the precision if float_type = FT_MPFR (and needs to be > 0 in that case); ignored otherwise.
    - prune_start: start index of pruning (first index being 0).
    - prune_end: (prune_end - 1) is the end index of pruning.
    - prune_pre_nodes: preprocessing cost, in number of nodes.
    - prune_min_prob: target probability or expected number of solutions (depending on the metric, which can be either PRUNER_METRIC_PROBABILITY_OF_SHORTEST or PRUNER_METRIC_EXPECTED_SOLUTIONS). If PRUNER_SINGLE is enabled, the optimization tries to minimize single_enum_cost() while achieving prune_min_prob; if PRUNER_SINGLE is disabled (the default), it tries to minimize single_enum_cost() * trials + preproc_cost * (trials - 1.0).
    - gh_factor: input GH factor used to compute the enumeration radius. The computed enumeration radius will be min(GH * gh_factor, |b_i*|).
- Returns
- the status of the pruning
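An end-to-end sketch; gen_uniform and lll_reduction are fplll's basis generator and LLL wrapper, and all numeric arguments are illustrative:

```cpp
#include <fplll/fplll.h>

using namespace fplll;

int main()
{
  // Random 40-dimensional basis with 20-bit uniform entries (illustrative).
  ZZ_mat<mpz_t> B(40, 40);
  B.gen_uniform(20);

  // The pruner expects a reasonably reduced basis, so LLL-reduce first.
  lll_reduction(B);

  // Prune the whole block [0, 40) with an assumed preprocessing
  // cost of 1e6 nodes and the default cost/probability trade-off.
  int status = run_pruner(B, FT_DEFAULT, /*precision=*/0,
                          /*prune_start=*/0, /*prune_end=*/40,
                          /*prune_pre_nodes=*/1e6, /*prune_min_prob=*/-1,
                          /*gh_factor=*/1.0);
  return status;
}
```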
◆ run_pruner_f()
template<class FT >
int run_pruner_f (ZZ_mat< mpz_t > &B,
                  int sel_ft,
                  int precision = 0,
                  int prune_start = 0,
                  int prune_end = 1,
                  double prune_pre_nodes = 1e6,
                  double prune_min_prob = -1,
                  double gh_factor = 1.0)
Run pruner on B.
- Parameters
    - B: basis of the lattice to be reduced.
    - sel_ft: specifies the float type used for GSO computations.
    - precision: specifies the precision if sel_ft = FT_MPFR (and needs to be > 0 in that case).
    - prune_start: start index of pruning (first index being 0).
    - prune_end: (prune_end - 1) is the end index of pruning.
    - prune_pre_nodes: preprocessing cost, in number of nodes.
    - prune_min_prob: target probability. If it is -1, the optimization minimizes single_enum_cost() / success probability, without fixing the success probability; if it is > 0, the success probability is fixed and single_enum_cost() is minimized.
    - gh_factor: input GH factor used to compute the enumeration radius. The computed enumeration radius will be min(GH * gh_factor, |b_i*|).
- Returns
- the status of the pruning
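The two prune_min_prob modes side by side (a sketch; FP_NR<double> with FT_DOUBLE is one consistent pairing of the template parameter and sel_ft, and B is an LLL-reduced basis as in the sketch above):

```cpp
// prune_min_prob = -1: optimize the cost/success-probability ratio,
// i.e. the expected cost over repeated randomized trials.
run_pruner_f<FP_NR<double>>(B, FT_DOUBLE, 0, 0, B.get_rows(), 1e6, -1.0, 1.0);

// prune_min_prob = 0.5: fix the success probability at 0.5 and
// minimize the cost of a single enumeration.
run_pruner_f<FP_NR<double>>(B, FT_DOUBLE, 0, 0, B.get_rows(), 1e6, 0.5, 1.0);
```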
◆ svp_probability() [1/2]
template<class FT >
FT svp_probability (const PruningParams &pruning)
Probability that a pruned enumeration indeed returns the shortest vector.
Wrapping function for the Pruner class.
- Parameters
    - pruning: a pruning object. Only the coefficient field is used. (Coefficients must start with two 1s, be decreasing, and lie in the interval (0,1).)
- Returns
- probability
◆ svp_probability() [2/2]
template<class FT >
FT svp_probability (const vector< double > &pr)
Probability that a pruned enumeration indeed returns the shortest vector.
- Parameters
    - pr: a vector of pruning bounds. (Coefficients must start with two 1s, be decreasing, and lie in the interval (0,1).)
- Returns
- probability
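A quick sketch (bound values made up, but starting with two 1s and decreasing as required):

```cpp
std::vector<double> pr = {1.0, 1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4};
FP_NR<double> p = svp_probability<FP_NR<double>>(pr);
std::cout << "success probability: " << p.get_d() << std::endl;
```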