Small attack on LWE

Edit: new version

Here is a humble (and shy) attack on LWE with a binary secret and a Gaussian error. It is inspired by the BKW attack, but it needs far fewer samples. The basic idea behind it is that all samples can be useful and provide information; you just need to know how to look at them from the right point of view.

There are several programs; we will start with a simple attack on a few bits and then add new functions to tackle bigger keys. For now, I can handle 60-bit keys, and I will go higher soon.

To run them:

  • Install Rust

  • In the Cargo.toml file, add

    
    [dependencies]
    rand = "0.9.0"
    rand_distr = "0.5.1"
    rayon = "1.10.0"
  • cargo run --release

Let’s start with the basic idea.

Basic principle

LWE is a mathematical problem I will not detail too much here; you just need to know that it is at the core of several post-quantum cryptosystems recently proposed, for the reason that we do not have any quantum algorithm that could destroy them (yet).

To sum it up (way too) quickly, the LWE problem is a set of linear equations of size $n$ where all coefficients are sampled uniformly over a ring, for example $\mathbb{Z}_q$ with $q = 2^{64}$ (in theory, the problem is not defined over a ring, but in real life, it is). Those coefficients can be seen as a vector $a \in \mathbb{Z}_q^n$. The unknown $s$ is also a vector of $\mathbb{Z}_q^n$, thus each linear equation can be seen as a scalar product between two vectors of $\mathbb{Z}_q^n$ having its result in $\mathbb{Z}_q$.

Solving that system would be easy if there were no complication. It happens that adding a small error $e$ to each equation leads to one of the toughest cryptographic problems to solve.

Problems in cryptography are often not very complicated, but they are hard. It is easy to find a way to solve LWE, yet it is very expensive in terms of computation to do so.

In an LWE sample, or its ring version, the last term $b$ is calculated as follows:

$$b = \langle a, s \rangle + e \bmod q$$

Usually $e$ follows a (discrete) Gaussian distribution, but it is not mandatory. In cryptographic systems relying on LWE, the key $s$ is often taken in $\{0,1\}^n$ and not in $\mathbb{Z}_q^n$. We take advantage of that for the attack.
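As a minimal sketch (my own, using the same rand and rand_distr crates as the programs below), one such sample with $q = 2^{64}$ can be generated with wrapping 64-bit arithmetic, the modulus being implicit in the integer type:

use rand::Rng;
use rand_distr::{Distribution, Normal};

// One LWE sample (a, b) with q = 2^64: the reduction modulo q is the natural
// wrap-around of i64 arithmetic.
fn lwe_sample(s: &[i64], sigma: f64) -> (Vec<i64>, i64) {
    let mut rng = rand::rng();
    let normal = Normal::new(0.0, sigma).unwrap();
    let a: Vec<i64> = (0..s.len()).map(|_| rng.random::<i64>()).collect();
    // b = <a, s> + e mod 2^64; s is binary, so the products just select terms
    let mut b = a.iter().zip(s)
        .fold(0i64, |acc, (&ai, &si)| acc.wrapping_add(ai.wrapping_mul(si)));
    b = b.wrapping_add(normal.sample(&mut rng) as i64);
    (a, b)
}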

I suggest a method inspired by an attack called BKW; it is also similar to a method used in astronomy for getting clear pictures from a set of blurry ones. The idea is to pile up several results and get a signal by accumulating a deviation from the mean while the noise averages to zero. We create samples following an LWE-like distribution, each of which is a vector of size $n+1$: the first $n$ terms are the $a_i$ and the last term is $b$.

In order to perform the attack we want a guarantee that the scalar product, calculated in $\mathbb{Z}$, is in the interval $\left[-\frac{q}{2}, \frac{q}{2}\right)$. By that I mean

$$-\frac{q}{2} \le \langle a, s \rangle + e < \frac{q}{2} \quad \text{in } \mathbb{Z},$$

that is, the scalar product does not experience any reduction modulo $q$. If we were to know the real value of $\langle a, s \rangle$ over the integers, then the problem would be easy to solve. We can get that guarantee by selecting samples where the value of each $|a_i|$ is so small that there is a good probability for the sum of the $a_i$ picked up by the key to lie in the interval $\left[-\frac{q}{2}, \frac{q}{2}\right)$.

For example, a radical solution to achieve such a feat is to set a limit on each $|a_i|$ such that the scalar product can never leave the interval. Then the limit is

$$|a_i| < \frac{q}{2n}.$$

Such a drastic constraint leads to very few samples being accepted: the probability of finding a random sample verifying it is

$$\left(\frac{1}{n}\right)^{n},$$

which is worse than exponential.

We can already increase the number of accepted samples by using the Hamming weight $h$ instead of $n$. The Hamming weight is the number of non-zero bits of $s$, which is equal to the sum of all the $s_i$; we usually have $h \approx n/2$. The limit becomes $|a_i| < \frac{q}{2h}$.

The probability becomes

$$\left(\frac{1}{h}\right)^{n},$$

which is still terrible, but much higher.

If we set a hard limit for each $|a_i|$, we miss all the samples which "almost made it". Some samples have all coefficients below the limit except for a few of them. They can certainly be of statistical usefulness.

The constraint we use instead is that the sum of the $|a_i|$ must be below a given threshold. Given that the negative terms often compensate for the positive terms and vice versa, finding a sample that should be accepted is much more likely than with the previous constraint: if $\sum_{i=1}^{n} |a_i| < T$, where all the $a_i$ follow a uniform distribution over $[-q/2, q/2)$, then the probability that the scalar product stays in $[-q/2, q/2)$ remains high while far more samples pass the test (in the code, the test is $\log_2 \sum_i |a_i| < 65.3$ for $q = 2^{64}$).
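In code, this is the filter the programs below apply; a standalone version (my own naming) for clarity:

// Keep a sample when the l1 norm of its first K slots is below 2^threshold;
// the sample's last slot holds b, as in the programs below.
fn is_small<const K: usize, const K_1: usize>(sample: &[i64; K_1], threshold: f64) -> bool {
    let l1: i128 = sample[..K].iter().map(|&x| (x as i128).abs()).sum();
    (l1 as f64).log2() < threshold
}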

Now, let's imagine we picked up samples whose scalar products are guaranteed, with a good probability, to lie in $[-q/2, q/2)$.

For each index $k$, the sample mean, calculated in $\mathbb{Z}$, should evaluate to:

$$m_k = \frac{1}{N} \sum_{j=1}^{N} \operatorname{sign}\!\left(a_k^{(j)}\right) b^{(j)} \approx s_k \cdot \overline{|a_k|}$$

We can define a second mean where the term $a_k$ has been removed, and it should also evaluate to:

$$m'_k = \frac{1}{N} \sum_{j=1}^{N} \operatorname{sign}\!\left(a_k^{(j)}\right) \left(b^{(j)} - a_k^{(j)}\right) \approx (s_k - 1) \cdot \overline{|a_k|}$$

So that

$$m_k - m'_k = \overline{|a_k|}, \qquad \text{and} \qquad s_k \approx \frac{m_k}{\overline{|a_k|}}.$$

If $a_k$ is positive and the scalar product lies in the interval, the term $s_k a_k$ contributes "positively" to the scalar product, and negatively otherwise; the rest of the scalar product contributes on average $-\tfrac{1}{2}$ per term (zero would be more convenient, but the modulus taken is even, so the representatives in $[-q/2, q/2)$ average to $-\tfrac{1}{2}$).

This is the basic idea behind the algorithm:
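In code, a condensed sketch of that estimator (my own simplification of what the first program below computes; selected is assumed to hold the filtered samples, each row being $(a_1, \dots, a_K, b)$):

// Correlate b with the sign of a_k over the selected samples: the mean lands
// near the mean of |a_k| when s_k = 1 and near 0 when s_k = 0, so the
// normalized estimate is close to s_k itself.
fn estimate_bits<const K: usize, const K_1: usize>(selected: &[[i64; K_1]]) -> Vec<f64> {
    (0..K)
        .map(|k| {
            let m: i128 = selected.iter()
                .map(|row| (row[k].signum() as i128) * (row[K] as i128))
                .sum();
            let d: i128 = selected.iter().map(|row| (row[k] as i128).abs()).sum();
            if d == 0 { 0.0 } else { (m as f64) / (d as f64) }
        })
        .collect()
}

Rounding each estimate to 0 or 1 gives the candidate key; the full program additionally recenters the estimates by $\tfrac{1}{2}$ before rounding, to compensate for the even modulus.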

circular mean

The distribution being uniform, the samples whose scalar product (calculated in $\mathbb{Z}$) is guaranteed to be in $[-q/2, q/2)$ become rarer and rarer when $n$ grows.

But we can recover some information knowing the Hamming weight of the key, and arguably more information than just reducing the key by 1 bit; or at least, we can find more samples without knowing specifically one bit.

Let's demonstrate on a small example. We choose the modulus $q$ to be even, with representatives in $[-q/2, q/2)$. If the coefficients of a sample all sit close to the wrap-around point $\pm q/2$,

then the "mean" of the $a_i$ is misleading: the sample is not "centered". In addition, the distance to 0, $\sum_i |a_i|$, is large, and so is the distance to its mean, $\sum_i |a_i - \bar{a}|$.

Now imagine that we know the Hamming weight $h$. Then we can search for a smaller distance to the mean, because we can transform any $a_i$ into $a_i + q$ without invalidating the sample.

To do so, we search for other representatives, compute the mean in that representative system, and see if the distance to the mean is smaller than the distance we started with, here 24.

Lifting the smallest coefficient by $q$, one candidate system at a time, gives a handful of candidate representative systems, each with its own mean and distance to that mean.

The smallest distance to the mean is obtained by the third choice of representatives. So let's ignore the fact that the mean is not an integer, and let's take 8 as a mean. We now subtract 8 from all the $a_i$.

The centered sample's distance to 0 is now much smaller.

And to get a valid sample, we have to subtract $8h$ from $b$, where $h$ is the Hamming weight of the key, here 3.

What happened here is that we computed the circular mean and we centered the sample around that mean. The transformation on $b$ is made to keep the sample valid: subtracting a constant $m$ from every $a_i$ changes $\langle a, s \rangle$ by exactly $-mh$.
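Here is a toy version of that procedure (my own sketch; the vector, modulus and Hamming weight are made up for illustration, and the real code below uses $q = 2^{64}$ with a bit mask instead of rem_euclid):

// A toy roll_subarray with a small modulus q = 32 so the numbers stay
// readable. h is the Hamming weight of the key.
fn circular_center(a: &mut [i64], b: &mut i64, h: i64, q: i64) {
    let to_rep = |x: i64| (x + q / 2).rem_euclid(q) - q / 2; // representative in [-q/2, q/2)
    let mut reps: Vec<i64> = a.to_vec();
    let (mut best_dist, mut best_reps, mut best_mean) = (i64::MAX, reps.clone(), 0i64);
    for _ in 0..a.len() {
        let mean = reps.iter().sum::<i64>() / reps.len() as i64;
        let dist: i64 = reps.iter().map(|x| (x - mean).abs()).sum();
        if dist < best_dist {
            best_dist = dist;
            best_reps = reps.clone();
            best_mean = mean;
        }
        // lift the smallest representative by q and try the next system
        if let Some(i) = (0..reps.len()).min_by_key(|&i| reps[i]) {
            reps[i] += q;
        }
    }
    for (x, r) in a.iter_mut().zip(&best_reps) {
        *x = to_rep(*r - best_mean); // center every coefficient around the circular mean
    }
    *b = to_rep(*b - h * best_mean); // subtract h * mean from b to keep the sample valid
}

fn main() {
    // (15, -15, 14) mod 32 is a tight cluster around the wrap point ±16
    let (mut a, mut b) = ([15i64, -15, 14], 10i64);
    circular_center(&mut a, &mut b, 3, 32);
    assert_eq!(a, [0, 2, -1]); // the distance to 0 went from 44 down to 3
}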

Hereafter is a first simple version of the algorithm that works for small keys, knowing only the total hamming weight of the key.

For 2,000,000 samples, with a noise having a standard deviation of 50 and a threshold of 65.3, which means samples are selected if $\log_2\left(\sum_i |a_i|\right) < 65.3$, you should get a result looking like that:

[image: example run output]

V marks a match between the real key and the key found, and X a mismatch. Lowering the threshold gives better results, but you need more samples.

use rand::Rng;
use rand::rng;
use rand_distr::{Uniform, Normal, Distribution};
use rayon::prelude::*;
use std::convert::TryInto;


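// Experimental helper (not called in this main): multiplies a sample by the
// small constants i = 1..4, recenters each multiple with roll_subarray, and
// keeps the multiple with the smallest l1 norm; b is corrected by h times the
// removed mean.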
fn multiply<const K: usize,const K_1: usize>(array: &mut [i64; K_1],h:usize)
{

    //let mut temp_array:[i64;K] = array[..K].par_iter().map(|&x| x ).collect::<Vec<i64>>().try_into().unwrap();
    let mut temp_array= array.clone();
    let mut best_array = temp_array.clone();
    let mut best_distance = temp_array[..K].par_iter().map(|&x| x.abs() as i128).sum::<i128>();
    let mut mean = 0;
    let mut best_mean = 0;

    (1..5).into_iter().for_each(|i|{
        
        temp_array= array.clone();
        temp_array.iter_mut().for_each(|x|*x = (*x)*(i as i64));
        mean = roll_subarray::<K,K_1>(&mut temp_array,0,K);
        let distance:i128 = temp_array[..K].par_iter().map(|&x| x.abs() as i128).sum::<i128>();
        if distance < best_distance
        {
            //println!("in");
            best_distance = distance;
            best_array = temp_array.clone();
            best_mean = mean;

        } 
    });


    println!("out{}",best_distance.ilog2());  
    array[..K].iter_mut().zip(best_array.iter()).for_each(|(x,y)|*x = *y);
    array[K] = array[K] -(h as i64)*best_mean;
      
}


fn roll_subarray<const K: usize,const K_1: usize>(array: &mut [i64; K_1], start: usize,end: usize) -> i64 {
    let m = end - start; 
    assert!(m <= K, "m must be less than or equal to K");

    // Select the m consecutive elements of the chunk
    let mut temp_array: Vec<i128> = array[start..end].par_iter().map(|&x| x as i128).collect();

    let mut best_distance = i128::MAX;
    let mut best_array = temp_array.clone();
    let mut final_mean = 0;
    for _ in 0..m {
        // Compute the mean of the current representative system
        let mean: i128 = temp_array.par_iter().sum::<i128>() / (m as i128);

        // Compute the distance to the mean
        let distance: i128 = temp_array.par_iter().map(|&x| (x - mean).abs()).sum();

        // Keep the best distance and the corresponding representatives
        if distance < best_distance {
            best_distance = distance;
            best_array.copy_from_slice(&temp_array);
            final_mean = mean;
            println!("{}",(best_distance as f64).log2());
        }

        // Lift the smallest coefficient by 2^64
        if let Some((index, _)) = temp_array.par_iter().enumerate().min_by_key(|&(_, &x)| x) {
            temp_array[index] += 1i128 << 64;
        }
    }
    // Mask for i128 values: 2^64 - 1
    let mod_mask = (1i128 << 64) - 1;

    // Write the m centered elements back into the original array
    for (i, &value) in best_array.iter().enumerate() {
        array[i+start] = ((value -final_mean) & mod_mask) as i64;
    }

    // Reduce the mean modulo 2^64 with the mask
    final_mean = final_mean & mod_mask;

    // Return the mean
    final_mean as i64
}    


//main single hamming weight
//*
fn main() {
    // Sizes of the sample set and arrays
    let n: usize = 4_000_000; // number of samples
    const K: usize = 20; // dimension of the LWE vector
    const K_1: usize = 21;
    let hamming = 10; // number of non-zero bits in s

    // Uniform distribution for the a[i] values
    let uniform_dist = Uniform::new(i64::MIN, i64::MAX).expect("Failed to create uniform distribution");

    // Gaussian distribution for the error e
    let sigma = 50.0;
    let normal_dist = Normal::new(0.0, sigma).expect("Failed to create normal distribution");


    // Generate the secret s of size K with the specified number of non-zero bits
    let mut s = vec![0; K];
    let mut t = 0;
    while t < hamming {
        let index = (rng().random::<u64>() % (K as u64)) as usize;
        if s[index] == 0
        {
            s[index] = 1;
            t+= 1;
        } 
        
    }


    let mut vect: Vec<[i64; K + 1]> = (0..n)
        .into_par_iter() // parallelized with rayon
        .map(|_| {
            // Per-thread random number generator
            let mut rng = rand::rng();
            let mut array: [i64; K + 1] = [0; K + 1];

            // Generate the first K values a[i]
            for i in 0..K {
                array[i] = uniform_dist.sample(&mut rng);
            }


            // Weighted sum plus a Gaussian error
            let sum: i64 = s.iter().zip(&array[..K]).map(|(&si, &ai)| si * ai).sum();
            let error: i64 = normal_dist.sample(&mut rng) as i64;
            let last_value = sum + error;

            // Store the computed value b in the last slot
            array[K] = last_value;

            array
        })
        .collect();

    // Circular mean
    //*
    vect.par_iter_mut().for_each(|x|{
        let mean = roll_subarray::<K,K_1>( x, 0, K);
        x[K] = x[K] - mean*hamming;
    });
    // */
    
    // Select the samples below the threshold, in parallel
    let threshold = 65.3;
        
    let mut selected_vect: Vec<[i64; K + 1]> = vect.into_par_iter()
        .filter(|&array|{ 
            let acc = array[..K].iter().map(|&x| x.abs() as i128).sum::<i128>() as i128;
            (acc as f64).log2() < threshold
        })
        .collect();    
    

    let mut b:Vec<i128> =vec![0;K];
    let mut moyenne_a = vec![0i128;K];
    
    let len = selected_vect.len();
    for j in 0..len
    {   
        moyenne_a.iter_mut().zip(selected_vect[j].iter()).for_each(|(x,y)|{ *x= *x + (*y).abs() as i128 } );
        b.iter_mut().zip(selected_vect[j].iter()).for_each(|(x,y)|{*x= *x +(((*y).signum())*selected_vect[j][K]) as i128 } );    
    }
    println!();

    
    let mut max=moyenne_a[0];
    moyenne_a.iter_mut().for_each(|x|{if *x > max{max = *x};});
    let moy_b:i128 = (b.iter().sum::<i128>())/(K as i128); 
    b.iter_mut().for_each(|x|{*x = *x -moy_b ;println!("b {}",*x)});
    
    b.iter_mut().zip(moyenne_a.iter_mut()).for_each(|(x,y)|{
        if *y != 0
        {
            let temp= (*x)*((max)/(*y));
            if (*x) > 0 && temp < 0 || (*x) < 0 && temp > 0
            {
                println!("parity {}",*x);
            } 
            *x = ((*x as f64)*((max as f64)/(*y as f64))) as i128;
        }
    });
    
    let mut r:Vec<f64> = vec![0.0;K];

    println!();
    println!(" approx s: "); 
    
    for i in 0..K
    {
        r[i]= (b[i] as f64)/((K*len) as f64) ;  //-0.5;
        println!(" {} :{} ",i+1,r[i]);
    }

    let mut moy0:f64 =0.0;
    let mut moy1:f64 =0.0;
    let mut standard_dev=0.0;
    let mut compt0:f64 =0.0;
    let mut compt1:f64 =0.0;
    for i in 0..K{
        if s[i] == 1 {
            moy1+=r[i];
            compt1+=1.0;
        }else{
            moy0+=r[i];
            compt0+=1.0;
        }         
    }

    let mean_r: f64 = r.iter().sum::<f64>() / (K as f64);
    standard_dev = r.iter().map(|x| (*x - mean_r).powi(2)).sum::<f64>(); // sum of squared deviations of r
    standard_dev = standard_dev/(K as f64-1.0);
    standard_dev = standard_dev.sqrt();
    let moymoy =-0.5;
    moy0=moy0/compt0;
    moy1=moy1/compt1;    
    println!("mean of the 1 bits:{}",moy1);
    println!("mean of the 0 bits:{}",moy0);
    println!("moymoy       :{}",moymoy);
    println!("standard dev :{}",standard_dev);
    println!();
    let mut round_key:Vec<i64> = vec![0;K];
    let mut round_key_2:Vec<i64> = vec![0;K];
        for i in 0..K{ 
            if r[i] < 0.0{
                round_key[i]=0;    
            }
            else{
                round_key[i]=r[i].round() as i64;    
            }
            if round_key[i] > 1{
                round_key[i]=1;
            }        
        }
        for i in 0..K{ 
            if r[i]-moymoy < 0.0{
                round_key_2[i]=0;    
            }
            else{
                round_key_2[i]=(r[i]-moymoy).round() as i64;    
            }
            if round_key_2[i] > 1{
                round_key_2[i]=1;
            }        
        }       

    println!();
    print!("   indice   :");
    for i in 0..K{
        if i+1<10{
            print!(" {} ",i+1);    
        }else{
            print!("{} ",i+1);
        }
        
    }
    println!();
    print!("  round_key :");
    for i in 0..K{
        print!(" {} ",round_key[i]);
    }
    println!();
    println!();
    println!();
    print!("round_key_2 :");
    for i in 0..K{
        print!(" {} ",round_key_2[i]);
    }
    println!();
    println!();
    
    print!("     true s :");
    for i in 0..K{
        print!(" {} ",s[i]);
    }
    
    println!();
    println!();
    
    print!("differences :");
    for i in 0..K{
        if round_key[i]-(s[i] as i64)==0{
           print!(" V "); 
        }else{
           print!(" X "); 
        }
    }
    println!();
    println!();
    
    print!("differences2:");
    for i in 0..K{
        if round_key_2[i]-(s[i] as i64)==0{
           print!(" V "); 
        }else{
           print!(" X "); 
        }
    }
    println!();
    println!("counter:{}",len);
    println!("number of samples:{}",n); 

}

// */

Parsing millions of samples is a bit tedious, and the number of valid samples decreases rapidly when $n$ grows. To increase the size of the key we can recover, we can use two complementary tricks. The first is linear combinations. Making linear combinations between samples which have "small" components leads to samples which have increasingly smaller components. The idea is then to select the few samples which have very small components, and then combine them together. By small we mean that their distribution over the ring does not end up looking uniform through "wrap-around".

The second trick is having partial knowledge of the key, namely the Hamming weights of chunks of arbitrary size. Let's imagine the chunks have a size of 20; then each chunk of the key can have either 0, 1, 2, ... or 20 non-zero bits, thus in a real-life situation we can just try all the possible Hamming weights, which multiplies the duration of the calculation by 21 per chunk.

Assuming we know the Hamming weight of each chunk of 20 bits, we can increase the number of available samples by a lot. We use a sort of "circular mean".

In the code, the function finding the circular mean is called "roll_subarray". We apply the "roll_subarray" function to chunks of size 20 of each sample. It finds the circular mean of each chunk and centers all the chunk's values around that mean.
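As a sketch (my own; it reuses roll_subarray from the code and assumes the per-chunk Hamming weights are given in h), the per-chunk centering looks like this:

// Center each chunk of size K / h.len() around its own circular mean, then
// correct b once per chunk: removing a mean m from a chunk that holds h_c
// non-zero key bits changes <a, s> by -m * h_c.
fn center_chunks<const K: usize, const K_1: usize>(sample: &mut [i64; K_1], h: &[usize]) {
    let chunk = K / h.len();
    for (c, &hc) in h.iter().enumerate() {
        let mean = roll_subarray::<K, K_1>(sample, c * chunk, (c + 1) * chunk);
        sample[K] = sample[K].wrapping_sub(mean.wrapping_mul(hc as i64));
    }
}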

The function slightly transforms the distribution; we can then make linear combinations between transformed samples to increase the likelihood of finding "small" samples.

A real-life search for a 40-bit key would then take $21 \times 21 = 441$ times longer than finding the key when the Hamming weights of its two 20-bit chunks are known.

Chunks

The next algorithm uses the principle described above, although it uses chunks of a much smaller size (5). We will describe a better method later. For a 40-bit key I suggest a threshold around 65.7 and 2000 initial samples; you can go lower by taking more samples, but they become increasingly rare, and you can set delta to 10.0. The second version of the algorithm has some degree of parallelism. The lower the threshold, the better the signal, but the fewer samples you get.

After 10 linear combinations, the standard deviation of the error should have grown by a factor of $\sqrt{2}^{10} = 32$ (each sum or difference of two samples doubles the error variance), which is perfectly acceptable. Knowing the Hamming weight of every chunk of that size is a lot of information though.

In a realistic situation, the duration of the computation would be multiplied by a factor of $6$ per chunk (because there could be 0, 1, 2, 3, 4 or 5 non-zero bits in each chunk). It is preferable to have the lowest number of non-zero bits to get some signal.

The smaller the Hamming weight of the key, the better the signal, because the more likely the scalar product is to stay in the interval. If a key has a high Hamming weight, we can switch to the complement key to get a signal with a small Hamming weight, by removing $b$ from $\sum_i a_i$: a key of size $n$ having a Hamming weight of $h > n/2$ is as difficult to find as a key of size $n$ with a Hamming weight of $n - h$, since

$$\langle a, \mathbf{1} - s \rangle = \sum_{i=1}^{n} a_i - \langle a, s \rangle,$$

where $\mathbf{1}$ is the all-ones vector and $\mathbf{1} - s$ is the complement key, of Hamming weight $n - h$.

I recently came up with a new version of the algorithm, and I am working on a 60-bit key divided into 3 chunks of 20 bits. For now, I have recovered 49 of the 60 bits, but I forgot to set a non-zero error.

[image: run output]

twin samples

By construction, the algorithm manages to benefit from "twin samples". Twin samples are samples whose components can be very "far" from each other, but have the same variations. They are much more frequent than samples having just small components. For example, with the modulus $q$ and representatives in $[-q/2, q/2)$, take two samples $(a, b)$ and $(a', b')$ whose components differ by roughly a constant: knowing the Hamming weight $h$, we can change them into the sample $(a - a' - m, \; b - b' - mh)$, where $m$ is the circular mean of $a - a'$, and its components are small.

So the new version searches for a twin, or an approximate one, on the first 20 bits; then the size of the twin to be searched increases to 40, then to 60. I let the reader figure out why this works, because the reader is usually smarter than I am, obviously.

Here is the new version:

use rand::Rng;
use rand::rng;
use rand_distr::{Uniform, Normal, Distribution};
use rayon::prelude::*;


//*
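// Try replacing array_1 by the recentered difference array_1 - array_2 on the
// chunk [start, end): roll_subarray recenters it, hi (the chunk's Hamming
// weight) fixes the b slot, and the change is kept only if the l1 norm shrinks.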
fn find_a_twin_sub<const K: usize, const K_1: usize>(
    array_1: &mut [i64; K_1],
    array_2: &mut [i64; K_1],
    hi: usize,
    start:usize,
    end:usize
)->i128 {
    let mut temp_array_1 = array_1.clone();
    let temp_array_2 = array_2.clone();

    let mut best_distance = temp_array_1[start..end].iter().map(|&x| x.abs() as i128).sum::<i128>();
    //temp_array_1.iter_mut().zip(temp_array_2.iter()).for_each(|(x, y)| *x -= *y);
    temp_array_1.par_iter_mut().zip(temp_array_2.par_iter()).for_each(|(x, y)| *x -= *y);
    
    let mean = roll_subarray::<K, K_1>(&mut temp_array_1, start, end);
    temp_array_1[K] -= mean*(hi as i64);
    
    
    let distance: i128 = temp_array_1[start..end].iter().map(|&x| x.abs() as i128).sum();
    
    if distance < best_distance {
        best_distance = distance;
        (*array_1).copy_from_slice(&temp_array_1);
    }
    //let dee = (best_distance as f64).log2();
    //println!("out{}", dee);
    //dee
    best_distance
}
// */

//*
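// Same as find_a_twin_sub, but tries the sum array_1 + array_2 instead.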
fn find_a_twin_add<const K: usize, const K_1: usize>(
    array_1: &mut [i64; K_1],
    array_2: &mut [i64; K_1],
    hi: usize,
    start:usize,
    end:usize
)->i128 {
    let mut temp_array_1 = array_1.clone();
    let temp_array_2 = array_2.clone();

    let mut best_distance = temp_array_1[start..end].iter().map(|&x| x.abs() as i128).sum::<i128>();

    //temp_array_1.iter_mut().zip(temp_array_2.iter()).for_each(|(x, y)| *x += *y);
    temp_array_1.par_iter_mut().zip(temp_array_2.par_iter()).for_each(|(x, y)| *x += *y);
    
    let mean = roll_subarray::<K, K_1>(&mut temp_array_1, start, end);
    temp_array_1[K] -= mean*(hi as i64);
    
    
    let distance: i128 = temp_array_1[start..end].iter().map(|&x| x.abs() as i128).sum();
    
    if distance < best_distance {
        best_distance = distance;
        (*array_1).copy_from_slice(&temp_array_1);
    }
    //let dee = (best_distance as f64).log2();
    //println!("out{}", dee);
    //gcd_of_array::<K,K_1>(array_1);
    //dee
    best_distance
}
// */


//*
fn reduce_by_twin<const K: usize, const K_1: usize>(
    vec_1: &mut Vec<[i64; K_1]>,
    vec_2: &mut Vec<[i64; K_1]>,
    n: usize,
    h_v: &[usize],
    nb_chunck: usize
) {
    let h_vec = h_v.clone();
    let h_len = h_vec.len();
    let len = vec_1.len();
    let l = vec_2.len();
    //let mut vec_temp_sub = vec_1[0].clone();
    //let mut vec_temp_add = vec_1[0].clone();
    //let mut vec_tamp = vec_1[0].clone();
    //let mut bd: f64;

    for k in 0..h_len {
        for r in 1..(n + 1) {
            // Split vec_1 into chunks
            let chunk_size = (len + rayon::current_num_threads() - 1) / rayon::current_num_threads();
            let chunks: Vec<(usize, usize)> = (0..rayon::current_num_threads())
                .map(|i| (i * chunk_size, (i + 1) * chunk_size))
                .collect();

            // Process each chunk in parallel
            let results: Vec<Vec<[i64; K_1]>> = chunks.into_par_iter().map(|(start, end)| {
                let mut local_vec_1 = vec_1[start..end.min(len)].to_vec();

                for i in 0..local_vec_1.len() {
                    let mut vec_temp_sub = local_vec_1[i].clone();
                    let mut vec_temp_add = local_vec_1[i].clone();
                    let vec_tamp = local_vec_1[i].clone();
                    let mut vec_two = vec_2.clone();
                    //let mut bd = (local_vec_1[i].iter().map(|x| (*x as i128).abs()).sum::<i128>() as f64).log2();
            let mut bd = local_vec_1[i].iter().map(|x| (*x as i128).abs()).sum::<i128>() ;
                    for j in (start + i + 1)..l {
                        let ds = find_a_twin_sub::<K, K_1>(&mut vec_temp_sub, &mut vec_two[j], h_vec[k], 0, (k + 1) * K / nb_chunck);
                        let da = find_a_twin_add::<K, K_1>(&mut vec_temp_add, &mut vec_two[j], h_vec[k], 0, (k + 1) * K / nb_chunck);

                        if ds < bd && ds < da {
                            bd = ds;
                            //println!("ds:{}, r:{} i:{} j:{} k:{}", bd.ilog2(),r,i,j,k);
                            println!("r:{} i:{} j:{} k:{}",r,i,j,k);
                local_vec_1[i].copy_from_slice(&vec_temp_sub);
                        } else if da < bd {
                            bd = da;
                            //println!("da:{}, r:{} i:{} j:{} k:{}", bd.ilog2(),r,i,j,k);
                            println!("r:{} i:{} j:{} k:{}",r,i,j,k);
                local_vec_1[i].copy_from_slice(&vec_temp_add);
                        }
                        vec_temp_sub.copy_from_slice(&vec_tamp);
                        vec_temp_add.copy_from_slice(&vec_tamp);
                    }
                }

                local_vec_1
            }).collect();

            // Merge the results back into vec_1
            let mut index = 0;
            for chunk in results {
                for item in chunk {
                    vec_1[index] = item;
                    index += 1;
                }
            }

            // Refresh vec_2 from vec_1 after each round r
            vec_2.copy_from_slice(vec_1);
        }
    }
}

// */

/*
fn reduce_by_twin<const K: usize,const K_1: usize>(
    vec_1: &mut Vec<[i64; K_1]>,
    vec_2: &mut Vec<[i64; K_1]>,
    n: usize,
    h_v: &[usize],
    nb_chunck: usize
)
{
    let h_vec = h_v.clone();
    let h_len = h_vec.len();
    let len = vec_1.len();
    let l = vec_2.len();
    let mut vec_temp_sub = vec_1[0].clone();
    let mut vec_temp_add = vec_1[0].clone();
    let mut vec_tamp = vec_1[0].clone();
    let mut bd: f64;
    for k in 0..h_len
    {
        for r in 1..(n+1)
        {
            for i in 0..(len-1)
            {
                vec_temp_sub.copy_from_slice(&vec_1[i]);
                vec_temp_add.copy_from_slice(&vec_1[i]);
                vec_tamp.copy_from_slice(&vec_1[i]);
                bd = (vec_1[i].iter().map(|x| (*x as i128).abs()).sum::<i128>() as f64).log2();
                for j in (i+1)..l
                {
                    let ds = find_a_twin_sub::<K,K_1>(&mut vec_temp_sub,&mut vec_2[j],h_vec[k],0,(k+1)*K/nb_chunck);
                    let da = find_a_twin_add::<K,K_1>(&mut vec_temp_add,&mut vec_2[j],h_vec[k],0,(k+1)*K/nb_chunck);
                    //println!("r:{} i:{} j:{} k:{}",r,i,j,k);
                    if ds < bd && ds <da
                    {
    
                        bd = ds;
                        println!("ds:{}",bd);
                        println!("r:{} i:{} j:{} k:{}",r,i,j,k);
                        vec_1[i].copy_from_slice(&vec_temp_sub);
                    } else if da < bd 
                    {
    
                        bd = da;
                        println!("da:{}",bd);
                        println!("r:{} i:{} j:{} k:{}",r,i,j,k);
                        vec_1[i].copy_from_slice(&vec_temp_add);
                    }
                    vec_temp_sub.copy_from_slice(&vec_tamp);
                    vec_temp_add.copy_from_slice(&vec_tamp);

                }
            }
            vec_2.copy_from_slice(&vec_1);
        }   
    }
}
// */


fn roll_subarray<const K: usize, const K_1: usize>(array: &mut [i64; K_1], start: usize, end: usize) -> i64 {
    let m = end - start;
    assert!(m <= K, "m must be less than or equal to K");

    // Select the m consecutive elements of the chunk
    let mut temp_array: Vec<i128> = array[start..end].iter().map(|&x| x as i128).collect();

    let mut best_distance = i128::MAX;
    let mut best_array = temp_array.clone();
    let mut final_mean = 0;

    for _ in 0..m {
        // Compute the mean of the current representative system
        let mean: i128 = temp_array.iter().sum::<i128>() / (m as i128);

        // Compute the distance to the mean
        let distance: i128 = temp_array.iter().map(|&x| (x - mean).abs()).sum();

        // Keep the best distance and the corresponding representatives
        if distance < best_distance {
            best_distance = distance;
            best_array.copy_from_slice(&temp_array);
            final_mean = mean;
        }

        // Lift the smallest coefficient by 2^64
        if let Some((index, _)) = temp_array.iter().enumerate().min_by_key(|&(_, &x)| x) {
            temp_array[index] += 1i128 << 64;
        }
    }

    // Mask for i128 values: 2^64 - 1
    let mod_mask = (1i128 << 64) - 1;

    // Write the m centered elements back into the original array
    for (i, &value) in best_array.iter().enumerate() {
        array[i + start] = ((value - final_mean) & mod_mask) as i64;
    }

    // Reduce the mean modulo 2^64 with the mask
    final_mean &= mod_mask;

    // Return the mean
    final_mean as i64
}


fn main() {
    // Sizes of the sample set and arrays
    let n: usize = 30000; // number of samples
    const K: usize = 60; // dimension of the LWE vector
    const K_1: usize = 61;

    // selection threshold
    let threshold = 65.5;
    let delta = 10.0;

    // Hamming weight of each chunk
    let w = 10;
    let n_chunck = 3;
    const NC:usize = 3;
    let h_vec = [w;NC];


    // total Hamming weight
    let h:usize = h_vec.iter().sum();

    // number of rounds for the twin search
    let n_round = 10;
    
    //q/h : <a+q/h,s> = <a,s> + q = <a,s>
    let q_h = (((1_i128<<64)as f64)/(h as f64)) as i64;

    // Uniform distribution for the a[i] values
    let uniform_dist = Uniform::new(i64::MIN, i64::MAX).expect("Failed to create uniform distribution");

    // Gaussian distribution for the error e
    let sigma = 0.0;
    let normal_dist = Normal::new(0.0, sigma).expect("Failed to create normal distribution");


    // Generate the secret s of size K with the specified number of non-zero bits per chunk
    let mut s = vec![0; K];
    let step = K/n_chunck;
    let step_size = (K as u64)/(n_chunck as u64);
    for i in 0..n_chunck
    {
        let mut t = 0;
        while t < h_vec[i] 
        {
            let index = (rng().random::<u64>() % ( step_size )) as usize;
            if s[index+i*step] == 0
            {
                s[index + i*step] = 1;
                t+= 1;
            } 
        }
    }
 
    let mut vect: Vec<[i64; K + 1]> = (0..n)
        .into_par_iter() 
        .map(|_| {
            // Per-thread random number generator
            let mut rng = rand::rng();
            let mut array: [i64; K + 1] = [0; K + 1];

            // Generate the first K values a[i]
            for i in 0..K {
                array[i] = uniform_dist.sample(&mut rng);
            }


            // Weighted sum plus a Gaussian error
            let sum: i64 = s.iter().zip(&array[..K]).map(|(&si, &ai)| si * ai).sum();
            let error: i64 = normal_dist.sample(&mut rng) as i64;
            let last_value = sum + error;

            // Store the computed value b in the last slot
            array[K] = last_value;

            array
        })
        .collect();
    
    let mut vect_2 = vect.clone();
    reduce_by_twin::<K,K_1>(&mut vect,&mut vect_2,n_round,&h_vec,n_chunck);
    
    //*
    let mut selected_vect: Vec<[i64; K + 1]> = vect.clone().into_par_iter()
        .filter(|&array|{ 
            let acc = array[..K].iter().map(|&x| x.abs() as i128).sum::<i128>() as i128;
            (acc as f64).log2() <= threshold
        })
        .collect();    
    // */

    let mut b:Vec<i128> = vec![0;K];
    let mut moyenne_a = vec![0i128;K];
    
    let len = selected_vect.len();
    for j in 0..len
    {   
        moyenne_a.iter_mut().zip(selected_vect[j].iter()).for_each(|(x,y)|{ *x= *x + (*y).abs() as i128 } );
        b.iter_mut().zip(selected_vect[j].iter()).for_each(|(x,y)|{*x= *x +(((*y).signum())*selected_vect[j][K]) as i128 } );    
    }
    
    println!();

    let mut max=moyenne_a[0];
    moyenne_a.iter_mut().for_each(|x|{if *x > max{max = *x};});
    let moy_b:i128 = (b.iter().sum::<i128>())/(K as i128); 
    b.iter_mut().for_each(|x|{*x = *x -moy_b ;println!("b {}",*x)});
    //*
    b.iter_mut().zip(moyenne_a.iter_mut()).for_each(|(x,y)|{
        if *y != 0
        {
            let temp= (*x)*((max)/(*y));
            if (*x) > 0 && temp < 0 || (*x) < 0 && temp > 0
            {
                println!("parity {}",*x);
            }
            *x = ((*x as f64)*((max as f64)/(*y as f64))) as i128;
        }
    });

    // */
    let mut r:Vec<f64> = vec![0.0;K];

    println!();
    println!(" approx s: "); 
    
    for i in 0..K
    {
        r[i]= (b[i] as f64)/((K*len) as f64) ;  //-0.5;
        println!(" {} :{} ",i+1,r[i]);
    }

    let mut moy0:f64 =0.0;
    let mut moy1:f64 =0.0;
    let mut standard_dev=0.0;
    let mut compt0:f64 =0.0;
    let mut compt1:f64 =0.0;
    for i in 0..K{
        if s[i] == 1 {
            moy1+=r[i];
            compt1+=1.0;
        }else{
            moy0+=r[i];
            compt0+=1.0;
        }         
    }

    let mean_r: f64 = r.iter().sum::<f64>() / (K as f64);
    standard_dev = r.iter().map(|x| (*x - mean_r).powi(2)).sum::<f64>(); // sum of squared deviations of r
    standard_dev = standard_dev/(K as f64-1.0);
    standard_dev = standard_dev.sqrt();
    let moymoy =-0.5;
    moy0=moy0/compt0;
    moy1=moy1/compt1;    
    println!("mean of the 1 bits:{}",moy1);
    println!("mean of the 0 bits:{}",moy0);
    println!("moymoy       :{}",moymoy);
    println!("standard dev :{}",standard_dev);
    println!();
    let mut round_key:Vec<i64> = vec![0;K];
    let mut round_key_2:Vec<i64> = vec![0;K];
        for i in 0..K{ 
            if r[i] < 0.0{
                round_key[i]=0;    
            }
            else{
                round_key[i]=r[i].round() as i64;    
            }
            if round_key[i] > 1{
                round_key[i]=1;
            }        
        }
        for i in 0..K{ 
            if r[i]-moymoy < 0.0{
                round_key_2[i]=0;    
            }
            else{
                round_key_2[i]=(r[i]-moymoy).round() as i64;    
            }
            if round_key_2[i] > 1{
                round_key_2[i]=1;
            }        
        }       

    println!();
    print!("   indice   :");
    for i in 0..K{
        if i+1<10{
            print!(" {} ",i+1);    
        }else{
            print!("{} ",i+1);
        }
        
    }
    println!();
    print!("  round_key :");
    for i in 0..K{
        print!(" {} ",round_key[i]);
    }
    println!();
    println!();
    println!();
    print!("round_key_2 :");
    for i in 0..K{
        print!(" {} ",round_key_2[i]);
    }
    println!();
    println!();
    
    print!("     true s :");
    for i in 0..K{
        print!(" {} ",s[i]);
    }
    
    println!();
    println!();
    
    print!("differences :");
    for i in 0..K{
        if round_key[i]-(s[i] as i64)==0{
           print!(" V "); 
        }else{
           print!(" X "); 
        }
    }
    println!();
    println!();
    
    print!("differences2:");
    for i in 0..K{
        if round_key_2[i]-(s[i] as i64)==0{
           print!(" V "); 
        }else{
           print!(" X "); 
        }
    }
    println!();
    println!("counter:{}",len);
    println!("number of samples:{}",n);

}

// */


Next step is 100 bits.

For 2000 samples, a 40-bit secret key with 16 non-zero bits, and a threshold of 65.8, here is the type of result you should expect:

[image: run output]

Detailed explanations after the code.

use rand::Rng;
use rand::rng;
use rand_distr::{Uniform, Normal, Distribution};
use rayon::prelude::*;


//*
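// Try both array_1 - array_2 and array_1 + array_2, recenter each of the 8
// chunks with its circular mean (per-chunk weights h1..h8), and keep whichever
// of the three candidates (including the original array_1) has the smallest l1 norm.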
fn linear_combination_8<const K: usize, const K_1: usize>(
    array_1: &mut [i64; K_1],
    array_2: &mut [i64; K_1],
    h: usize,
    h1: usize,
    h2: usize,
    h3: usize,
    h4: usize,
    h5: usize,
    h6: usize,
    h7: usize,
    h8: usize,
) -> f64 {
    let mut temp_array_1 = array_1.clone();
    let mut temp_array_1_2 = array_1.clone();
    let temp_array_2 = array_2.clone();

    let mut best_distance = temp_array_1[..K].iter().map(|&x| x.abs() as i128).sum::<i128>();
    let mut best_array = temp_array_1.clone();
    let mut best_mean = 0;

    temp_array_1.iter_mut().zip(temp_array_2.iter()).for_each(|(x, y)| *x -= *y);
    temp_array_1_2.iter_mut().zip(temp_array_2.iter()).for_each(|(x, y)| *x += *y);

    let means_1 = [
        roll_subarray::<K, K_1>(&mut temp_array_1, 0, K / 8),
        roll_subarray::<K, K_1>(&mut temp_array_1, K / 8, K / 4),
        roll_subarray::<K, K_1>(&mut temp_array_1, K / 4, 3 * K / 8),
        roll_subarray::<K, K_1>(&mut temp_array_1, 3 * K / 8, K / 2),
        roll_subarray::<K, K_1>(&mut temp_array_1, K / 2, 5 * K / 8),
        roll_subarray::<K, K_1>(&mut temp_array_1, 5 * K / 8, 3 * K / 4),
        roll_subarray::<K, K_1>(&mut temp_array_1, 3 * K / 4, 7 * K / 8),
        roll_subarray::<K, K_1>(&mut temp_array_1, 7 * K / 8, K),
    ];

    let means_2 = [
        roll_subarray::<K, K_1>(&mut temp_array_1_2, 0, K / 8),
        roll_subarray::<K, K_1>(&mut temp_array_1_2, K / 8, K / 4),
        roll_subarray::<K, K_1>(&mut temp_array_1_2, K / 4, 3 * K / 8),
        roll_subarray::<K, K_1>(&mut temp_array_1_2, 3 * K / 8, K / 2),
        roll_subarray::<K, K_1>(&mut temp_array_1_2, K / 2, 5 * K / 8),
        roll_subarray::<K, K_1>(&mut temp_array_1_2, 5 * K / 8, 3 * K / 4),
        roll_subarray::<K, K_1>(&mut temp_array_1_2, 3 * K / 4, 7 * K / 8),
        roll_subarray::<K, K_1>(&mut temp_array_1_2, 7 * K / 8, K),
    ];

    let h_vec = [h1, h2, h3, h4, h5, h6, h7, h8];

    temp_array_1[K] -= means_1.iter().zip(&h_vec).map(|(&mean, &h)| mean * (h as i64)).sum::<i64>();
    temp_array_1_2[K] -= means_2.iter().zip(&h_vec).map(|(&mean, &h)| mean * (h as i64)).sum::<i64>();

    let gmean_1 = roll_subarray::<K, K_1>(&mut temp_array_1, 0, K);
    let gmean_2 = roll_subarray::<K, K_1>(&mut temp_array_1_2, 0, K);
    let distance: i128 = temp_array_1[..K].iter().map(|&x| x.abs() as i128).sum();
    let distance_2: i128 = temp_array_1_2[..K].iter().map(|&x| x.abs() as i128).sum();

    if distance < best_distance {
        best_distance = distance;
        best_array = temp_array_1.clone();
        best_mean = gmean_1;
    } else if distance_2 < best_distance {
        best_distance = distance_2;
        best_array = temp_array_1_2.clone();
        best_mean = gmean_2;
    }

    let dee = (best_distance as f64).log2();
    println!("out{}", dee);

    *array_1 = best_array.clone();
    array_1[K] -= (h as i64) * best_mean;
    dee
}
// */

fn roll_subarray<const K: usize, const K_1: usize>(array: &mut [i64; K_1], start: usize, end: usize) -> i64 {
    let m = end - start;
    assert!(m <= K, "m must be less than or equal to K");

    // Select the m consecutive elements of the chunk
    let mut temp_array: Vec<i128> = array[start..end].iter().map(|&x| x as i128).collect();

    let mut best_distance = i128::MAX;
    let mut best_array = temp_array.clone();
    let mut final_mean = 0;

    for _ in 0..m {
        // Compute the mean of the current representative system
        let mean: i128 = temp_array.iter().sum::<i128>() / (m as i128);

        // Compute the distance to the mean
        let distance: i128 = temp_array.iter().map(|&x| (x - mean).abs()).sum();

        // Keep the best distance and the corresponding representatives
        if distance < best_distance {
            best_distance = distance;
            best_array.copy_from_slice(&temp_array);
            final_mean = mean;
        }

        // Lift the smallest coefficient by 2^64
        if let Some((index, _)) = temp_array.iter().enumerate().min_by_key(|&(_, &x)| x) {
            temp_array[index] += 1i128 << 64;
        }
    }

    // Mask for i128 values: 2^64 - 1
    let mod_mask = (1i128 << 64) - 1;

    // Write the m centered elements back into the original array
    for (i, &value) in best_array.iter().enumerate() {
        array[i + start] = ((value - final_mean) & mod_mask) as i64;
    }

    // Reduce the mean modulo 2^64 with the mask
    final_mean &= mod_mask;

    // Return the mean
    final_mean as i64
}


//*
// main multiple hamming weight
fn main() {
    // Sizes of the sample set and arrays
    let n: usize = 2000; // number of samples
    const K: usize = 56; // dimension of the LWE vector
    const K_1: usize = 57;

    // selection threshold
    let threshold = 65.7;


    // Hamming weight of each chunk
    let hamming_1 = 3; 
    let hamming_2 = 3;
    let hamming_3 = 3;     
    let hamming_4 = 3;
    let hamming_5 = 3;
    let hamming_6 = 3;
    let hamming_7 = 3;
    let hamming_8 = 3;

    // total Hamming weight
    let hamming = hamming_1+hamming_2+hamming_3+hamming_4+hamming_5+hamming_6+hamming_7+hamming_8;

    
    //q/h : <a+q/h,s> = <a,s> + q = <a,s>
    let q_h = (((1_i128<<64)as f64)/(hamming as f64)) as i64;

    // Uniform distribution for the a[i] values
    let uniform_dist = Uniform::new(i64::MIN, i64::MAX).expect("Failed to create uniform distribution");

    // Gaussian distribution for the error e
    let sigma = 10.0;
    let normal_dist = Normal::new(0.0, sigma).expect("Failed to create normal distribution");


    // Generate the secret s of size K with the specified number of non-zero bits per chunk
    let mut s = vec![0; K];
    let mut t = 0;
    let step = K/8;
    while t < hamming_1 {
        let index = (rng().random::<u64>() % ((K as u64)/8)) as usize;
        if s[index] == 0
        {
            s[index] = 1;
            t+= 1;
        } 
    }
    t = 0;
    while t < hamming_2 {
        let index = (rng().random::<u64>() % ((K as u64)/8)) as usize;
        if s[index+step] == 0
        {
            s[index+step] = 1;
            t+= 1;
        } 
    }
    //*
    t = 0;
    while t < hamming_3 {
        let index = (rng().random::<u64>() % ((K as u64)/8)) as usize;
        if s[index+2*step] == 0
        {
            s[index+2*step] = 1;
            t+= 1;
        } 
    }
    t = 0;
    while t < hamming_4 {
        let index = (rng().random::<u64>() % ((K as u64)/8)) as usize;
        if s[index+3*step] == 0
        {
            s[index+3*step] = 1;
            t+= 1;
        } 
    }
    t = 0;
    while t < hamming_5 {
        let index = (rng().random::<u64>() % ((K as u64)/8)) as usize;
        if s[index+4*step] == 0
        {
            s[index+4*step] = 1;
            t+= 1;
        } 
    }
    t = 0;
    while t < hamming_6 {
        let index = (rng().random::<u64>() % ((K as u64)/8)) as usize;
        if s[index+5*step] == 0
        {
            s[index+5*step] = 1;
            t+= 1;
        } 
    }
    t = 0;
    while t < hamming_7 {
        let index = (rng().random::<u64>() % ((K as u64)/8)) as usize;
        if s[index+6*step] == 0
        {
            s[index+6*step] = 1;
            t+= 1;
        } 
        
    }
    t = 0;
    while t < hamming_8 {
        let index = (rng().random::<u64>() % ((K as u64)/8)) as usize;
        if s[index+7*step] == 0
        {
            s[index+7*step] = 1;
            t+= 1;
        } 
    }
    // */
 
    let mut vect: Vec<[i64; K + 1]> = (0..n)
        .into_par_iter() 
        .map(|_| {
            // Per-thread random number generator
            let mut rng = rand::rng();
            let mut array: [i64; K + 1] = [0; K + 1];

            // Generate the first K values a[i]
            for i in 0..K {
                array[i] = uniform_dist.sample(&mut rng);
            }


            // Weighted sum plus a Gaussian error
            let sum: i64 = s.iter().zip(&array[..K]).map(|(&si, &ai)| si * ai).sum();
            let error: i64 = normal_dist.sample(&mut rng) as i64;
            let last_value = sum + error;

            // Store the computed value b in the last slot
            array[K] = last_value;

            array
        })
        .collect();
    
    //minimize the variance to a 0 mean
    //vect.iter_mut().for_each(|x| add_q_h_to_array::<K,K_1>(x, q_h, hamming) );
    

    let l = vect.len();
    let mut vect_2 = vect.clone();

    let thresholds = [66.5, 66.3, 66.1, 65.9, 65.7, 65.5, 65.3, 65.2, 65.0, 64.8];
    let round_names = [
        "first", "second", "third", "fourth", "fifth", "sixth", "seventh", "eighth", "ninth", "tenth"
    ];
    let mut v_temp = vect.clone();
    let mut v_tamp = vect.clone();
    for (round, threshold) in thresholds.iter().enumerate() {
        println!("{} round !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!", round_names[round]);

        for i in 0..l-1 {
            let mut tmp: f64 = (i64::MAX) as f64; 
            let mut bd: f64 = (i64::MAX) as f64;
            v_tamp[i].copy_from_slice(&vect[i]);
            for j in i+1..l {
                if bd >= *threshold {
                    println!("new linear combination {}", round + 1);
                    println!("bd:{}", bd);
                    println!("i:{},j:{}", i, j);
                    tmp = linear_combination_8::<K, K_1>(
                        &mut v_temp[i],
                        &mut vect_2[j],
                        hamming,
                        hamming_1,
                        hamming_2,
                        hamming_3,
                        hamming_4,
                        hamming_5,
                        hamming_6,
                        hamming_7,
                        hamming_8,
                    );
                    if tmp < bd
                    {
                        bd = tmp;
                        vect[i].copy_from_slice(&v_temp[i]);
                        v_tamp[i].copy_from_slice(&v_temp[i]);
                    };
                    v_temp[i].copy_from_slice(&v_tamp[i]);
                }
            }
        }

        vect_2 = vect.clone();
    }
     //*
    let mut selected_vect: Vec<[i64; K + 1]> = vect.clone().into_par_iter()
        .filter(|&array|{ 
            let acc = array[..K].iter().map(|&x| x.abs() as i128).sum::<i128>() as i128;
            (acc as f64).log2() <= threshold
        })
        .collect();    
    // */

    let mut b:Vec<i128> = vec![0;K];
    let mut moyenne_a = vec![0i128;K];
    
    let len = selected_vect.len();
    for j in 0..len
    {   
        moyenne_a.iter_mut().zip(selected_vect[j].iter()).for_each(|(x,y)|{ *x= *x + (*y).abs() as i128 } );
        b.iter_mut().zip(selected_vect[j].iter()).for_each(|(x,y)|{*x= *x +(((*y).signum())*selected_vect[j][K]) as i128 } );    
    }
    
    println!();

    let mut max=moyenne_a[0];
    moyenne_a.iter_mut().for_each(|x|{if *x > max{max = *x};});
    let moy_b:i128 = (b.iter().sum::<i128>())/(K as i128); 
    b.iter_mut().for_each(|x|{*x = *x -moy_b ;println!("b {}",*x)});
    //*
    b.iter_mut().zip(moyenne_a.iter_mut()).for_each(|(x,y)|{
        if *y != 0
        {
            let temp= (*x)*((max)/(*y));
            if (*x) > 0 && temp < 0 || (*x) < 0 && temp > 0
            {
                println!("parity {}",*x);
            }
            *x = ((*x as f64)*((max as f64)/(*y as f64))) as i128;
        }
    });

    // */
    let mut r:Vec<f64> = vec![0.0;K];

    println!();
    println!(" approx s: "); 
    
    for i in 0..K
    {
        r[i]= (b[i] as f64)/((K*len) as f64) ;  //-0.5;
        println!(" {} :{} ",i+1,r[i]);
    }

    let mut moy0:f64 =0.0;
    let mut moy1:f64 =0.0;
    let mut standard_dev=0.0;
    let mut compt0:f64 =0.0;
    let mut compt1:f64 =0.0;
    for i in 0..K{
        if s[i] == 1 {
            moy1+=r[i];
            compt1+=1.0;
        }else{
            moy0+=r[i];
            compt0+=1.0;
        }         
    }

    let mean_r: f64 = r.iter().sum::<f64>() / (K as f64);
    standard_dev = r.iter().map(|x| (*x - mean_r).powi(2)).sum::<f64>(); // sum of squared deviations of r
    standard_dev = standard_dev/(K as f64-1.0);
    standard_dev = standard_dev.sqrt();
    let moymoy =-0.5;
    moy0=moy0/compt0;
    moy1=moy1/compt1;    
    println!("mean of the 1 bits:{}",moy1);
    println!("mean of the 0 bits:{}",moy0);
    println!("moymoy       :{}",moymoy);
    println!("standard dev :{}",standard_dev);
    println!();
    let mut round_key:Vec<i64> = vec![0;K];
    let mut round_key_2:Vec<i64> = vec![0;K];
        for i in 0..K{ 
            if r[i] < 0.0{
                round_key[i]=0;    
            }
            else{
                round_key[i]=r[i].round() as i64;    
            }
            if round_key[i] > 1{
                round_key[i]=1;
            }        
        }
        for i in 0..K{ 
            if r[i]-moymoy < 0.0{
                round_key_2[i]=0;    
            }
            else{
                round_key_2[i]=(r[i]-moymoy).round() as i64;    
            }
            if round_key_2[i] > 1{
                round_key_2[i]=1;
            }        
        }       

    println!();
    print!("   indice   :");
    for i in 0..K{
        if i+1<10{
            print!(" {} ",i+1);    
        }else{
            print!("{} ",i+1);
        }
        
    }
    println!();
    print!("  round_key :");
    for i in 0..K{
        print!(" {} ",round_key[i]);
    }
    println!();
    println!();
    println!();
    print!("round_key_2 :");
    for i in 0..K{
        print!(" {} ",round_key_2[i]);
    }
    println!();
    println!();
    
    print!("     true s :");
    for i in 0..K{
        print!(" {} ",s[i]);
    }
    
    println!();
    println!();
    
    print!("differences :");
    for i in 0..K{
        if round_key[i]-(s[i] as i64)==0{
           print!(" V "); 
        }else{
           print!(" X "); 
        }
    }
    println!();
    println!();
    
    print!("differences2:");
    for i in 0..K{
        if round_key_2[i]-(s[i] as i64)==0{
           print!(" V "); 
        }else{
           print!(" X "); 
        }
    }
    println!();
    println!("counter:{}",len);
    println!("number of samples:{}",n);

}

// */

Optimisation

Now, if you happen to know the Hamming weight of the first half of your LWE secret key and the Hamming weight of the second half, it is possible to decrease the difficulty of finding a valid sample even further.

linear combinations

Using the circular mean optimisation, we can create a first selection of samples with a distance slightly smaller than without the optimisation. The circular mean optimisation changes (slightly) the distribution.

From that new set, we can create a new set made of linear combinations of samples belonging to the first set, then apply the circular mean again and select a set of samples where the distance to 0 is smaller once again. Repeating the operation again and again decreases the distance, while it increases the noise (a piece of the calculation follows).
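The noise part of that calculation (the distance decrease depends on the selection, so I only state the error growth): if each new sample is the sum or difference of two samples with independent errors of standard deviation $\sigma$, the error variance doubles at each level,

$$\operatorname{Var}(e \pm e') = 2\sigma^2 \quad\Longrightarrow\quad \sigma_t = \sigma_0 \cdot 2^{t/2}$$

after $t$ successive levels of combination; this is where the factor $\sqrt{2}^{10} = 32$ for 10 combinations quoted earlier comes from.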

Using linear combinations instead of sieving millions of samples presents two advantages for the attacker. First, only a limited number of sources sharing the same key needs to be intercepted, which matters in practice. Secondly, samples having a smaller total distance (the sum of the absolute values of all the slots) combine into new samples, a lot of which have an even smaller distance.

adding or subtracting q/h

Adding or subtracting $q/h$ to each $a_i$ also has the potential to reduce the distance, but all the trials I made showed no difference. It should create no difference only if the number of positive $a_i$ is equal to the number of negative $a_i$.
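The commented-out call in the code (add_q_h_to_array) refers to a function along these lines (my own reconstruction; q_h is the rounded $q/h$ computed in main):

// Shift every a_i by q/h: exactly h of them are selected by the key, so
// <a, s> moves by h * (q/h), which is (almost) q, i.e. (almost) 0 mod 2^64.
// b is shifted by the same wrapped amount so the sample stays valid.
fn add_q_h_to_array<const K: usize, const K_1: usize>(array: &mut [i64; K_1], q_h: i64, h: usize) {
    array[..K].iter_mut().for_each(|x| *x = x.wrapping_add(q_h));
    array[K] = array[K].wrapping_add((h as i64).wrapping_mul(q_h));
}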

comparison to sieving

isometries

Isometries should also have the potential to find new samples, at the cost of changing the key, but those samples would not be more frequent. If the isometry (or a change of basis) happens to concentrate the new key into a single component of size equal to the Hamming weight $h$, then the corresponding component $a'_k$ should satisfy:

$$b \approx h \cdot a'_k + e,$$

as long as $|h \cdot a'_k| < q/2$; if not, then $a'_k$ was too big and a modular reduction happened.
