Edit: new version
Here is a humble (and shy) attack on LWE with a binary secret and a Gaussian error. It is inspired by the BKW attack, but it needs far fewer samples. The basic idea behind it is that all samples can be useful and provide information; you just need to know how to look at them from the right point of view.
We present two programs: we will start with a simple attack on a few bits, and then we discuss an improvement to the first program.
To run them:
Install Rust.
In the Cargo.toml file, add:
[dependencies]
rand = "0.9.0"
rand_distr = "0.5.1"
rayon = "1.10.0"
Then run:
cargo run --release
Let’s start with the basic idea.
Basic principle
LWE is a mathematical problem I will not detail too much here; you just need to know that it is at the core of several recently proposed post-quantum cryptosystems, for the reason that, at the time of writing these lines, there is no known quantum algorithm that could break them.
To sum it up (way too) quickly, the LWE problem is a set of linear equations of size $n$ where all coefficients are sampled uniformly in $\mathbb{Z}_q$ (in theory the problem is not defined over a ring, but in real life it is). The collection of coefficients can be seen as a vector $a$ of $\mathbb{Z}_q^n$. The secret $s$ is also a vector of $\mathbb{Z}_q^n$, thus each linear equation can be seen as a scalar product between two vectors of $\mathbb{Z}_q^n$, with its result in $\mathbb{Z}_q$. $s$ will be called the secret key, because it usually constitutes the core of the private key for all cryptosystems relying on LWE.
Solving that system is easy, but the LWE problem modifies it slightly so that finding the solution becomes almost impossible unless the amount of computation becomes very large. Adding a small random error to each $b$ leads to one of the toughest cryptographic problems to solve.
Problems in cryptography are often not very complicated, but they are hard. It is easy to find a way to solve LWE, yet it is very expensive in terms of computation to do so.
In an LWE sample, or its ring version, the last term $b$ is calculated as follows: $b = \sum_{i=1}^{n} a_i s_i + e = \langle a, s\rangle + e \pmod q$.
$e$ usually follows a (discrete) Gaussian distribution, but it is not mandatory. In cryptographic systems relying on LWE, the key is often taken in $\{0,1\}^n$ and not in $\mathbb{Z}_q^n$. The attack presented here heavily relies on that assumption.
I suggest a method vaguely inspired by an attack on LWE called BKW. You can also see some similarity with a method used in astronomy for getting clear pictures from a set of blurry ones: the idea is to pile up many results and recover a signal by accumulating a deviation from the mean while the noise averages to zero. We create many samples following an LWE(-like) distribution, each of which is of size $n + 1$: the first $n$ terms are the $a_i$ and the last term is $b$.
In order to perform the attack, we want a guarantee that the scalar product, calculated in $\mathbb{Z}$, is in the interval $[-q/2, q/2)$. By that I mean that the scalar product does not experience any reduction modulo $q$: its value in $\mathbb{Z}_q$ is equal to its value in $\mathbb{Z}$. If we knew, for every sample, in which representative range $\langle a, s\rangle$ lies, then the problem would be easy to solve. We can be almost certain that the scalar product lies in $[-q/2, q/2)$ by selecting samples where each $a_i$ is so small that it becomes very likely for the sum of the $a_i s_i$ to land in that interval.
For example, a radical solution to achieve such a feat is to set a limit on each $|a_i|$ such that the scalar product cannot leave the interval whatever $s$ is. The limit then has the value $\frac{q}{2n}$.
Such a drastic constraint leads to very few samples being accepted: the probability of finding a random sample verifying it is
$$\left(\frac{1}{n}\right)^{n} = n^{-n},$$
which is worse than exponential.
We can already increase the number of acceptable samples by observing that, on average, only half of the $a_i$ participate in the scalar product. We increase the number of accepted samples by using the hamming weight $h$ in the limit instead of $n$, i.e. by taking $\frac{q}{2h}$. The hamming weight is the number of non-zero bits of $s$, thus $h = \sum_{i=1}^{n} s_i$. We usually have $h \approx n/2$. Any large deviation of $h$ from $n/2$ can be used by the attack to recover the secret key more easily.
The probability becomes
$$\left(\frac{1}{h}\right)^{n} \approx \left(\frac{2}{n}\right)^{n},$$
which is still terrible but much higher.
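With the limits above, and for the toy parameters of the first program below ($n = 20$, $h = 10$), the two acceptance probabilities come out roughly as
$$n^{-n}\big|_{n=20} = 20^{-20} \approx 2^{-86.4}, \qquad h^{-n}\big|_{h=10,\ n=20} = 10^{-20} \approx 2^{-66.4},$$
i.e. the hamming-weight observation buys a factor of about $2^{20}$, but both numbers remain hopeless.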
If we set a limit on each $|a_i|$, we miss all the samples which "almost made it": some samples have all coefficients below the limit except for a few of them. They can certainly be of statistical usefulness.
The constraint we use instead is that the sum of the $a_i$ must be below a given threshold, not each component individually. Given that the negative terms often compensate the positive terms and vice versa, finding a sample that should be accepted is much more likely than with the previous constraint.
Let’s introduce the polynomial coefficients.
If $X = \sum_{i=1}^{n} X_i$ where all the $X_i$ follow a uniform distribution on $\{0, 1, \dots, q-1\}$, then the probability that $X$ takes the value $k$ is
$$P[X = k] = \frac{1}{q^n}\binom{n}{k}_{q},$$
where
$$\binom{n}{k}_{q}$$
is a polynomial coefficient, i.e. the coefficient in front of $x^k$ in the expansion of
$$(1 + x + x^2 + \dots + x^{q-1})^n.$$
It is a special case of a multinomial coefficient (or, less fuzzily, a sum of multinomial coefficients).
For our interval, we can define new variables $Y_i = a_i + q/2$, which are uniform on $\{0, \dots, q-1\}$; the probability that $\sum_i a_i$ falls in $[-q/2, q/2)$ is then the probability that $\sum_i Y_i$ falls in an interval of length $q$ around $nq/2$.
Because of the central limit theorem, when $n$ is big enough, the distribution of the sum can be approximated fairly well by a Gaussian distribution of expected value $-\tfrac{n}{2}$ (each $a_i$ has mean $-\tfrac12$, because the modulus is even) and standard deviation $q\sqrt{n/12}$.
So what is the approximate probability that a random sample lies in the interval? For $n = 20$, this is about $0.3$: approximately one third of all samples lie in $[-q/2, q/2)$. The problem is that our criterion on the sum of the $a_i$ cannot pick up all the samples landing in that interval.
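As a quick check of that figure, assuming each $a_i$ is uniform on $[-q/2, q/2)$ (mean $-\tfrac12$, variance $\approx q^2/12$), the central limit theorem gives
$$\sum_{i=1}^{n} a_i \;\approx\; \mathcal{N}\!\left(-\tfrac{n}{2},\ \tfrac{n\,q^2}{12}\right), \qquad P\!\left[\,\Big|\sum_{i=1}^{n} a_i\Big| < \tfrac{q}{2}\,\right] \;\approx\; 2\,\Phi\!\left(\sqrt{\tfrac{3}{n}}\right) - 1 \;\approx\; 0.30 \ \text{ for } n = 20.$$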
On average, only about half of the $a_i$ actually enter the scalar product, so a reasonable heuristic is to bound $\sum_i |a_i|$ rather than each coefficient separately, and to account for the hamming weight by allowing that sum to be roughly twice what a direct bound on the scalar product would suggest. For $q = 2^{64}$ and $n = 20$, the threshold obtained this way is actually lower than the one implied by the per-coefficient limit. In practice though, we recover a lot of signal by setting the threshold around $2^{65}$ on $\sum_i |a_i|$ (the programs below use a $\log_2$ threshold of about 65).
Back to our first idea of computing the probability: we define new random variables $Y_i = |a_i|$, the $a_i$ following a uniform distribution on $[-q/2, q/2)$, and we want the probability that $\sum_i Y_i$ stays below the threshold, i.e. the probability of finding a valid sample among random samples. For $n = 20$ and the threshold above, the estimate says that over 2,000,000 samples we should recover approximately 22,000. This is absolutely not the number of valid samples that we find, so there might be a huge mistake somewhere.
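Rather than trusting the combinatorics, a quick Monte Carlo sketch can estimate both fractions empirically for a given dimension and threshold. This helper is mine, not part of the programs below; it reuses their conventions: $q = 2^{64}$ via i64 arithmetic, a binary secret, and selection on $\log_2 \sum_i |a_i|$.
use rand::Rng;

// Hypothetical helper (not part of the attack code): estimate, over `trials` random
// samples, (a) the fraction whose scalar product with a random binary secret stays in
// [-2^63, 2^63) over the integers, and (b) the fraction passing the selection criterion
// log2(sum |a_i|) < threshold_log2.
fn estimate_rates(n_dim: usize, trials: usize, threshold_log2: f64) -> (f64, f64) {
    let mut rng = rand::rng();
    // random binary secret, hamming weight ~ n_dim/2
    let s: Vec<i64> = (0..n_dim).map(|_| (rng.random::<u64>() & 1) as i64).collect();
    let (mut in_range, mut accepted) = (0usize, 0usize);
    for _ in 0..trials {
        let a: Vec<i64> = (0..n_dim).map(|_| rng.random::<i64>()).collect();
        // exact scalar product over the integers (i128 avoids any overflow)
        let dot: i128 = a.iter().zip(&s).map(|(&ai, &si)| ai as i128 * si as i128).sum();
        if dot.abs() < 1i128 << 63 {
            in_range += 1;
        }
        let l1: i128 = a.iter().map(|&ai| (ai as i128).abs()).sum();
        if (l1 as f64).log2() < threshold_log2 {
            accepted += 1;
        }
    }
    (in_range as f64 / trials as f64, accepted as f64 / trials as f64)
}

fn main() {
    let (p_in, p_acc) = estimate_rates(20, 1_000_000, 65.3);
    println!("in range: {p_in}  accepted: {p_acc}");
}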
algorithm
So far we have defined an appropriate interval, the probability that an appropriate sample exists, and a method to have a good chance of finding one, but we have not discussed how to benefit from those findings. Let's imagine we picked up samples whose scalar products are guaranteed to lie in $[-q/2, q/2)$ with a good probability.
For each index $j$, the sample mean of $\operatorname{sign}(a_j)\cdot b$, calculated in $\mathbb{Z}$ over those samples, should evaluate to roughly the mean of $|a_j|$ when $s_j = 1$, plus a small offset. We can define a second mean where the term $a_j s_j$ has been removed from the scalar product; it should evaluate to that small offset alone, whatever $s_j$ is. So the gap between the two means betrays the value of $s_j$.
Indeed, if $a_j$ is positive and the scalar product lies in the interval, the term $a_j s_j$ contributes "positively" to the scalar product, and negatively otherwise, while the rest of the scalar product contributes on average $-\tfrac{1}{2}$ per non-zero bit of the key (zero would be more convenient, but the modulus taken is even, so the representatives are not symmetric).
This is the principle behind the treatment of samples:
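As a minimal sketch of that idea (the function name and the simplified normalization are mine, not the program's), here is the per-bit accumulation in isolation: for every coordinate $j$ we sum $\operatorname{sign}(a_j)\cdot b$ over the selected samples, and the coordinates where $s_j = 1$ drift away from the others.
// Hypothetical, simplified version of the per-bit accumulation done in main() below:
// each sample is a slice of k+1 values, the first k being the a_i and the last one b.
// For every coordinate j we sum sign(a_j) * b over the selected samples; coordinates
// with s_j = 1 accumulate roughly the sum of |a_j|, the others stay comparatively small.
fn score_bits(selected: &[Vec<i64>], k: usize) -> Vec<i128> {
    let mut score = vec![0i128; k];
    for sample in selected {
        let b = sample[k] as i128;
        for j in 0..k {
            score[j] += sample[j].signum() as i128 * b;
        }
    }
    score
}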
circular mean
The distribution of the $a_i$ being uniform, the samples whose scalar product (calculated in $\mathbb{Z}$) is guaranteed to be in $[-q/2, q/2)$ become rarer and rarer as $n$ grows.
But we can recover some information knowing, or assuming, the hamming weight of the sample. Arguably, this provides more information than just reducing the key by 1 bit: at least, the information provided by the hamming weight weakens the secret for all its bits.
If a valid LWE sample can be written
$$b = \sum_{i=1}^{n} a_i s_i + e,$$
then, if $c$ is a constant,
$$b - c\,h = \sum_{i=1}^{n} (a_i - c)\, s_i + e$$
is also a valid sample, since
$$\sum_{i=1}^{n} (a_i - c)\, s_i = \sum_{i=1}^{n} a_i s_i - c \sum_{i=1}^{n} s_i = \sum_{i=1}^{n} a_i s_i - c\,h.$$
Thus, to get a new valid sample, it is only needed to remove $c\,h$ from $b$ and the constant $c$ from each $a_i$. Some constants lower the distance to 0, and/or balance the sample better. Thus, knowing the hamming weight allows us to transform the sample. In theory the hamming weight should only remove one degree of freedom from the system, which means it should be equivalent to removing one bit from the secret key. It seems instead that, with that technique, it can provide some information on all bits of the key.
Let's illustrate the idea with a small example with no error, a small modulus, and a secret key of hamming weight 3.
Take a sample whose $a_i$, once written with the usual representatives, are spread out: its "mean" is far from 0 and the sample is not "centered". In our example, the distance of the sample to its mean is 24.
Now imagine that we know the hamming weight; then we can search for a smaller distance to the mean, because we are allowed to transform the sample.
To do so, we try the other choices of representatives, compute the mean for each choice, and check whether the distance to the mean becomes smaller than the distance we started with, here 24.
The smallest distance to the mean is obtained for the third choice of representatives, whose mean is close to 8. Let's ignore the fact that the mean is not an integer and take 8 as the mean. We now subtract 8 from all the $a_i$, which brings the sample much closer to 0.
To keep the sample valid, we also have to subtract $8h$ from $b$, where $h$ is the hamming weight of the sample, here 3, i.e. we subtract 24 from $b$.
We computed a sort of circular mean on the ring and we centered the sample around that mean. The transformation of $b$ is performed such that the sample is still a valid LWE sample, and it uses the hamming weight.
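Since it is easy to get lost in the representative juggling, here is a tiny self-contained sketch of exactly that transformation, with made-up numbers (not the ones of the example above): pick a constant $c$, subtract it from every $a_i$ and subtract $c\cdot h$ from $b$; the sample stays valid.
// Illustrative only: re-center a sample around a constant c, as described above.
// The last slot of `sample` holds b; arithmetic wraps, i.e. works modulo 2^64.
fn recenter(sample: &mut [i64], c: i64, h: i64) {
    let k = sample.len() - 1;
    for a in &mut sample[..k] {
        *a = a.wrapping_sub(c);
    }
    sample[k] = sample[k].wrapping_sub(c.wrapping_mul(h));
}

fn main() {
    // secret s = [1, 1, 0, 1] (hamming weight 3), error 0, so b = a_1 + a_2 + a_4
    let s = [1i64, 1, 0, 1];
    let mut sample = [40i64, -25, 7, 9, 40 - 25 + 9];
    let h: i64 = s.iter().sum();
    recenter(&mut sample, 8, h); // take 8 as the "circular mean"
    // the transformed sample is still a valid (error-free) sample for s
    let dot: i64 = s.iter().zip(&sample[..4]).map(|(&si, &ai)| si * ai).sum();
    assert_eq!(dot, sample[4]);
    println!("{:?}", sample); // [32, -33, -1, 1, 0]
}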
Hereafter is a first simple version of the algorithm that works for small keys, knowing only the total hamming weight of the key.
Usually, the smaller the hamming weight of the key, the better the signal, or the more likely the algorithm is to recover the key.
If a key has a high hamming weight, then it is as good as if it had a small hamming weight: we can transform the samples to get a signal for a small hamming weight by removing $\sum_{i} a_i$ from $b$, which amounts to replacing the secret $s$ by its complement $1 - s$.
For example, a key of size $n$ having a hamming weight of $h$ is as difficult to find as a key of size $n$ with a hamming weight of $n - h$, because there are as many keys of each kind: $\binom{n}{h} = \binom{n}{n - h}$.
Thus, the worst situation really happens when the hamming weight is exactly $n/2$.
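Written out (with my notation), the complement trick is simply
$$b - \sum_{i=1}^{n} a_i \;=\; \sum_{i=1}^{n} a_i (s_i - 1) + e \;=\; \sum_{i=1}^{n} (-a_i)(1 - s_i) + e,$$
so the pair $\left(-a,\; b - \sum_i a_i\right)$ is a valid sample for the complementary key $1 - s$, whose hamming weight is $n - h$.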
For 2,000,000 samples, with a noise having a standard deviation of 50 and a threshold of 65.3, which means samples are selected if $\log_2\!\left(\sum_i |a_i|\right) < 65.3$, you should get a result looking like this:
V is for a match between the real key and the key found, and X is for a mismatch. Lowering the threshold gives better results, but you need more samples.
use rand::Rng;
use rand::rng;
use rand_distr::{Uniform, Normal, Distribution};
use rayon::prelude::*;
use std::convert::TryInto;
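// roll_subarray computes the "circular mean" of the chunk array[start..end]:
// it tries the m possible choices of representatives modulo 2^64 (by repeatedly
// lifting the smallest element by 2^64), keeps the choice that minimizes the l1
// distance to the mean, re-centers the chunk around that mean, and returns the mean.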
fn roll_subarray<const K: usize,const K_1: usize>(array: &mut [i64; K_1], start: usize,end: usize) -> i64 {
let m = end - start;
assert!(m <= K, "m must be less than or equal to K");
// Select the m consecutive elements [start..end]
let mut temp_array: Vec<i128> = array[start..end].par_iter().map(|&x| x as i128).collect();
let mut best_distance = i128::MAX;
let mut best_array = temp_array.clone();
let mut final_mean = 0;
for _ in 0..m {
// Compute the mean of the temporary array
let mean: i128 = temp_array.par_iter().sum::<i128>() / (m as i128);
// Compute the l1 distance to the mean
let distance: i128 = temp_array.par_iter().map(|&x| (x - mean).abs()).sum();
// Update the best distance and the corresponding array
if distance < best_distance {
best_distance = distance;
best_array.copy_from_slice(&temp_array);
final_mean = mean;
println!("{}",(best_distance as f64).log2())
}
// Lift the smallest coefficient by 2^64 (next choice of representatives)
if let Some((index, _)) = temp_array.par_iter().enumerate().min_by_key(|&(_, &x)| x) {
temp_array[index] += 1i128 << 64;
}
}
// Mask to reduce i128 values modulo 2^64
let mod_mask = (1i128 << 64) - 1;
// Write back the m re-centered elements into the original array
for (i, &value) in best_array.iter().enumerate() {
array[i+start] = ((value -final_mean) & mod_mask) as i64;
}
// Reduce the mean modulo 2^64 with the mask
final_mean = final_mean & mod_mask;
// Return the mean
final_mean as i64
}
//main single hamming weight
//*
fn main() {
// Sizes of the sample set and of the arrays
let n: usize = 4_000_000; // Number of samples
const K: usize = 20; // dimension of the LWE vector
const K_1: usize = 21;
let hamming = 10; // Number of non-zero values in s
// Uniform distribution for the values a[i]
let uniform_dist = Uniform::new(i64::MIN, i64::MAX).expect("Failed to create uniform distribution");
// Gaussian distribution for the error e
let sigma = 50.0;
let normal_dist = Normal::new(0.0, sigma).expect("Failed to create normal distribution");
// Generate the secret s of size K with the specified number of non-zero values
let mut s = vec![0; K];
let mut t = 0;
while t < hamming {
let index = (rng().random::<u64>() % (K as u64)) as usize;
if s[index] == 0
{
s[index] = 1;
t+= 1;
}
}
let mut vect: Vec<[i64; K + 1]> = (0..n)
.into_par_iter() // Use rayon to parallelize the generation
.map(|_| {
// Create a random number generator for each thread
let mut rng = rand::rng();
let mut array: [i64; K + 1] = [0; K + 1];
// Generate the first K values a[i]
for i in 0..K {
array[i] = uniform_dist.sample(&mut rng);
}
// Compute the weighted sum and add a Gaussian error
let sum: i64 = s.iter().zip(&array[..K]).map(|(&si, &ai)| si * ai).sum();
let error: i64 = normal_dist.sample(&mut rng) as i64;
let last_value = sum + error;
// Store the computed value in the last slot of the array
array[K] = last_value;
array
})
.collect();
// Circular mean
//*
vect.par_iter_mut().for_each(|x|{
let mean = roll_subarray::<K,K_1>( x, 0, K);
x[K] = x[K] - mean*hamming;
});
// */
// Select the arrays below the threshold, in parallel
let threshold = 65.3;
let mut selected_vect: Vec<[i64; K + 1]> = vect.into_par_iter()
.filter(|&array|{
let acc = array[..K].iter().map(|&x| x.abs() as i128).sum::<i128>() as i128;
(acc as f64).log2() < threshold
})
.collect();
let mut b:Vec<i128> =vec![0;K];
let mut moyenne_a = vec![0i128;K];
let len = selected_vect.len();
for j in 0..len
{
moyenne_a.iter_mut().zip(selected_vect[j].iter()).for_each(|(x,y)|{ *x= *x + (*y).abs() as i128 } );
b.iter_mut().zip(selected_vect[j].iter()).for_each(|(x,y)|{*x= *x +(((*y).signum())*selected_vect[j][K]) as i128 } );
}
println!();
let mut max=moyenne_a[0];
moyenne_a.iter_mut().for_each(|x|{if *x > max{max = *x};});
let moy_b:i128 = (b.iter().sum::<i128>())/(K as i128);
b.iter_mut().for_each(|x|{*x = *x -moy_b ;println!("b {}",*x)});
b.iter_mut().zip(moyenne_a.iter_mut()).for_each(|(x,y)|{
if *y != 0
{
let temp= (*x)*((max)/(*y));
if (*x) > 0 && temp < 0 || (*x) < 0 && temp > 0
{
println!("parité{}",*x);
}
*x = ((*x as f64)*((max as f64)/(*y as f64))) as i128;
}
});
let mut r:Vec<f64> = vec![0.0;K];
println!();
println!(" faux s: ");
for i in 0..K
{
r[i]= (b[i] as f64)/((K*len) as f64) ; //-0.5;
println!(" {} :{} ",i+1,r[i]);
}
let mut moy0:f64 =0.0;
let mut moy1:f64 =0.0;
let mut standard_dev=0.0;
let mut compt0:f64 =0.0;
let mut compt1:f64 =0.0;
for i in 0..K{
if s[i] == 1 {
moy1+=r[i];
compt1+=1.0;
}else{
moy0+=r[i];
compt0+=1.0;
}
}
// standard deviation of the scores r[i] around their mean
let mean_r: f64 = r.iter().sum::<f64>() / (K as f64);
for i in 0..K {
standard_dev += (r[i] - mean_r) * (r[i] - mean_r);
}
standard_dev = standard_dev/(K as f64-1.0);
standard_dev = standard_dev.sqrt();
let moymoy =-0.5;
moy0=moy0/compt0;
moy1=moy1/compt1;
println!("moyenne des 1:{}",moy1);
println!("moyenne des 0:{}",moy0);
println!("moymoy :{}",moymoy);
println!("standard dev :{}",standard_dev);
println!();
let mut round_key:Vec<i64> = vec![0;K];
let mut round_key_2:Vec<i64> = vec![0;K];
for i in 0..K{
if r[i] < 0.0{
round_key[i]=0;
}
else{
round_key[i]=r[i].round() as i64;
}
if round_key[i] > 1{
round_key[i]=1;
}
}
for i in 0..K{
if r[i]-moymoy < 0.0{
round_key_2[i]=0;
}
else{
round_key_2[i]=(r[i]-moymoy).round() as i64;
}
if round_key_2[i] > 1{
round_key_2[i]=1;
}
}
println!();
print!(" indice :");
for i in 0..K{
if i+1<10{
print!(" {} ",i+1);
}else{
print!("{} ",i+1);
}
}
println!();
print!(" round_key :");
for i in 0..K{
print!(" {} ",round_key[i]);
}
println!();
println!();
println!();
print!("round_key_2 :");
for i in 0..K{
print!(" {} ",round_key_2[i]);
}
println!();
println!();
print!(" vrai s :");
for i in 0..K{
print!(" {} ",s[i]);
}
println!();
println!();
print!("differences :");
for i in 0..K{
if round_key[i]-(s[i] as i64)==0{
print!(" V ");
}else{
print!(" X ");
}
}
println!();
println!();
print!("differences2:");
for i in 0..K{
if round_key_2[i]-(s[i] as i64)==0{
print!(" V ");
}else{
print!(" X ");
}
}
println!();
println!("compteur:{}",len);
println!("nombre d'echantillons:{}",n);
}
// */
Parsing millions of samples is a bit tedious, and the number of valid samples decreases rapidly when $n$ increases. To increase the size of the key we can recover, we can use two complementary tricks.
The first is linear combinations.
Making linear combinations between samples which have "small" components leads to samples which have increasingly smaller components. The idea is then to select the few samples which have very small components and combine them together. By small, we mean that their distribution over the ring does not end up uniform because of wrap-around.
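The reason linear combinations are allowed at all deserves one line (my notation): the difference of two valid samples is again a valid sample for the same secret, with the errors adding up,
$$b_1 - b_2 \;=\; \sum_{i=1}^{n} (a_{1,i} - a_{2,i})\, s_i \;+\; (e_1 - e_2),$$
and we only keep the combinations whose components actually got smaller.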
The second trick we use is assuming we have partial information on the key: we assume we know the hamming weights of chunks of arbitrary size. In a real-life situation, instead of assuming we know them, we can just try all the possible hamming weights of each chunk. For example, if the chunks have a size of 20, then each chunk of the key can have 0, 1, 2, ... or 20 non-zero bits; thus, in a real-life situation, we can just try all the possible hamming weights and apply the same algorithm. That brute-force method would multiply the duration of the calculation by at most $21^{\,n/20}$ (21 possible weights per chunk of 20 bits).
In practice though, the bigger the chunk is, the smaller the relative standard deviation of the number of non-zero bits in that chunk. If the chunk has a size of 20 bits, you can expect roughly 70% of the hamming weights to fall between 7.77 and 12.23, and about 95% of them between 5.54 and 14.46.
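Those bounds are just the mean plus or minus one and two standard deviations of a binomial distribution (assuming each key bit is 0 or 1 with probability $\tfrac12$):
$$h_{\text{chunk}} \sim \mathrm{Bin}\!\left(20, \tfrac12\right), \qquad \mu = 10, \qquad \sigma = \sqrt{20 \cdot \tfrac14} \approx 2.23,$$
which gives the intervals $[7.77,\ 12.23]$ and $[5.54,\ 14.46]$ quoted above.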
Dividing the key into chunks of arbitrary size allows the number of valid samples to increase by a lot.
In the code, the function that finds the circular mean is called "roll_subarray". We apply the "roll_subarray" function to chunks of size 20 for each sample. The function is linear in $n$, with $n$ the size of the key. Some parallelism might reduce that, but each key chunk is too small to be worth the overhead caused by parallelism.
roll_subarray finds the circular mean of each chunk and centers the sample values of that chunk around it.
This slightly transforms the distribution; we can then make linear combinations between transformed samples to increase, little by little, the likelihood of finding "small" samples.
We start with 40,000 samples and a threshold of $2^{65.5}$. We recovered 55 bits of the key; to recover all 60 bits, the algorithm needs to start with more samples and/or a lower threshold. Let's go higher.
twin samples
By construction, the algorithm manages to benefit from "twin samples". Twin samples are samples whose components can be very "far" from each other but have the same variations, i.e. their componentwise difference is almost constant. They are much more frequent than samples having only small components. For example, with the modulus $q$ and representatives in $[-q/2, q/2)$, given two such samples, we can subtract one from the other and, knowing the hamming weight, remove the nearly constant difference exactly as in the previous section, which leaves a sample with very small components.
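Here is a tiny illustration of the twin mechanism with made-up numbers (mine, for clarity; the error is set to 0): the two samples have large, far-apart components, but their componentwise difference is almost the constant 40, and removing that constant with the hamming weight leaves a very small valid sample.
// Illustrative only: two "twin" samples whose componentwise difference is nearly
// the constant c = 40. Subtract them, then remove c from every a_i and c*h from b:
// the result is still a valid sample for s, with tiny components.
fn main() {
    let s = [1i64, 0, 1, 1]; // secret, hamming weight h = 3
    let h: i64 = s.iter().sum();

    let a1 = [50i64, -3000, 1000, 77];
    let a2 = [10i64, -3040, 961, 36];
    let b1: i64 = s.iter().zip(&a1).map(|(&si, &ai)| si * ai).sum();
    let b2: i64 = s.iter().zip(&a2).map(|(&si, &ai)| si * ai).sum();

    let c = 40i64;
    let a: Vec<i64> = a1.iter().zip(&a2).map(|(&x, &y)| x - y - c).collect();
    let b = (b1 - b2) - c * h;

    let dot: i64 = s.iter().zip(&a).map(|(&si, &ai)| si * ai).sum();
    assert_eq!(dot, b);
    println!("a = {:?}, b = {}", a, b); // a = [0, 0, -1, 1], b = 0
}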
The new algorithm is described at a high level here:
The idea is to first reduce the "distance" of the first chunk by linear combinations until it becomes very small. Since we do not look at the other components, they keep being distributed uniformly.
Then, the same method is applied to the first two chunks, then to the first three chunks, etc.
Now, if you happen to know the hamming weight of the first half of your LWE secret key and the hamming weight of the second half, it is possible to decrease even further the difficulty of finding a valid sample.
Here is the algorithm:
// */
use rand::Rng;
use rand::rng;
use rand_distr::{Uniform, Normal, Distribution};
use rayon::prelude::*;
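// find_a_twin_sub / find_a_twin_add try to improve array_1 by subtracting (resp. adding)
// array_2 componentwise, then re-centering the chunk [start..end] with roll_subarray and
// adjusting the last slot (b) with the chunk hamming weight hi. The candidate is kept only
// if the l1 norm of the chunk decreases; the best (possibly unchanged) distance is returned.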
fn find_a_twin_sub<const K: usize, const K_1: usize>(
array_1: &mut [i64; K_1],
array_2: &mut [i64; K_1],
hi: usize,
start: usize,
end: usize
) -> i128 {
let mut temp_array_1 = *array_1;
let temp_array_2 = *array_2;
let mut best_distance = temp_array_1[start..end].iter().map(|&x| x.abs() as i128).sum::<i128>();
temp_array_1.iter_mut().zip(temp_array_2.iter()).for_each(|(x, y)| *x -= *y);
let mean = roll_subarray::<K, K_1>(&mut temp_array_1, start, end);
temp_array_1[K] -= mean * (hi as i64);
let distance: i128 = temp_array_1[start..end].iter().map(|&x| x.abs() as i128).sum();
if distance < best_distance {
best_distance = distance;
*array_1 = temp_array_1;
}
best_distance
}
fn find_a_twin_add<const K: usize, const K_1: usize>(
array_1: &mut [i64; K_1],
array_2: &mut [i64; K_1],
hi: usize,
start: usize,
end: usize
) -> i128 {
let mut temp_array_1 = *array_1;
let temp_array_2 = *array_2;
let mut best_distance = temp_array_1[start..end].iter().map(|&x| x.abs() as i128).sum::<i128>();
temp_array_1.iter_mut().zip(temp_array_2.iter()).for_each(|(x, y)| *x += *y);
let mean = roll_subarray::<K, K_1>(&mut temp_array_1, start, end);
temp_array_1[K] -= mean * (hi as i64);
let distance: i128 = temp_array_1[start..end].iter().map(|&x| x.abs() as i128).sum();
if distance < best_distance {
best_distance = distance;
*array_1 = temp_array_1;
}
best_distance
}
fn reduce_by_twin<const K: usize, const K_1: usize>(
vec_1: &mut Vec<[i64; K_1]>,
vec_2: &mut Vec<[i64; K_1]>,
n: usize,
h_v: &[usize],
nb_chunck: usize
) {
let h_vec = h_v.to_vec();
let h_len = h_vec.len();
let len = vec_1.len();
let l = vec_2.len();
for k in 0..h_len {
let f = (k + 1) * K / nb_chunck;
for r in 1..(n + 1) {
let chunck_size = (len + rayon::current_num_threads() - 1) / rayon::current_num_threads();
let chunks: Vec<(usize, usize)> = (0..rayon::current_num_threads())
.map(|i| (i * chunck_size, (i + 1) * chunck_size))
.collect();
let results: Vec<Vec<[i64; K_1]>> = chunks.into_par_iter().map(|(start, end)| {
let mut local_vec_1 = vec_1[start.min(len)..end.min(len)].to_vec();
for i in 0..local_vec_1.len() {
let mut vec_temp_sub = local_vec_1[i];
let mut vec_temp_add = local_vec_1[i];
let vec_tamp = local_vec_1[i];
let mut vec_two = vec_2.clone();
let mut bd = local_vec_1[i][0..f].iter().map(|&x| x.abs() as i128).sum::<i128>();
for j in (start + i + 1)..l {
let ds = find_a_twin_sub::<K, K_1>(&mut vec_temp_sub, &mut vec_two[j], h_vec[k], 0,f);
let da = find_a_twin_add::<K, K_1>(&mut vec_temp_add, &mut vec_two[j], h_vec[k], 0,f);
if ds < bd && ds < da {
bd = ds;
local_vec_1[i] = vec_temp_sub;
//println!("ds:{} r:{} i:{} j:{} k:{}",(ds as f64).log2(),r,i,j,k);
println!("ds:{} r:{} i:{} j:{} k:{}",ds.ilog2(),r,i,j,k);
} else if da < bd {
bd = da;
local_vec_1[i] = vec_temp_add;
//println!("da:{} r:{} i:{} j:{} k:{}",(da as f64).log2(),r,i,j,k);
println!("da:{} r:{} i:{} j:{} k:{}",da.ilog2(),r,i,j,k);
}
vec_temp_sub = vec_tamp;
vec_temp_add = vec_tamp;
}
}
local_vec_1
}).collect();
let mut index = 0;
for chunk in results {
for item in chunk {
vec_1[index] = item;
index += 1;
}
}
vec_2.copy_from_slice(vec_1);
}
}
}
fn roll_subarray<const K: usize, const K_1: usize>(array: &mut [i64; K_1], start: usize, end: usize) -> i64 {
let m = end - start;
assert!(m <= K, "m must be less than or equal to K");
let mut temp_array: Vec<i128> = array[start..end].iter().map(|&x| x as i128).collect();
let mut best_distance = i128::MAX;
let mut best_array = temp_array.clone();
let mut final_mean = 0;
for _ in 0..m {
let mean: i128 = temp_array.iter().sum::<i128>() / (m as i128);
let distance: i128 = temp_array.iter().map(|&x| (x - mean).abs()).sum();
if distance < best_distance {
best_distance = distance;
best_array.copy_from_slice(&temp_array);
final_mean = mean;
}
if let Some((index, _)) = temp_array.iter().enumerate().min_by_key(|&(_, &x)| x) {
temp_array[index] += 1i128 << 64;
}
}
let mod_mask = (1i128 << 64) - 1;
for (i, &value) in best_array.iter().enumerate() {
array[i + start] = ((value - final_mean) & mod_mask) as i64;
}
(final_mean & mod_mask) as i64
}
fn main() {
let n: usize = 40000;
const K: usize = 60;
const K_1: usize = 61;
let threshold = 65.5;
let n_chunck = 3;
let cs= K/n_chunck;
const NC: usize = 3;
let mut h_vec = [0; NC];
let h: usize = 30;
let n_round = 10;
let uniform_dist = Uniform::new(i64::MIN, i64::MAX).expect("Failed to create uniform distribution");
let sigma = 10.0;
let normal_dist = Normal::new(0.0, sigma).expect("Failed to create normal distribution");
let mut s = vec![0; K];
let step = K / n_chunck;
let step_size = (K as u64) / (n_chunck as u64);
let mut t = 0;
while t < h
{
let index = (rng().random::<u64>() % (K as u64)) as usize;
if s[index ] == 0
{
s[index ] = 1;
t += 1;
}
}
for i in 0..n_chunck
{
h_vec[i] = (s[i*cs..(i+1)*cs].iter().sum::<i64>()) as usize;
}
let mut vect: Vec<[i64; K + 1]> = (0..n)
.into_par_iter()
.map(|_| {
let mut rng = rand::rng();
let mut array: [i64; K + 1] = [0; K + 1];
for i in 0..K {
array[i] = uniform_dist.sample(&mut rng);
}
let sum: i64 = s.iter().zip(&array[..K]).map(|(&si, &ai)| si * ai).sum();
let error: i64 = normal_dist.sample(&mut rng) as i64;
let last_value = sum + error;
array[K] = last_value;
array
})
.collect();
let mut vect_2 = vect.clone();
reduce_by_twin::<K, K_1>(&mut vect, &mut vect_2, n_round, &h_vec, n_chunck);
let mut selected_vect: Vec<[i64; K + 1]> = vect.clone().into_par_iter()
.filter(|&array| {
let acc = array[..K].iter().map(|&x| x.abs() as i128).sum::<i128>() as i128;
(acc as f64).log2() <= threshold
})
.collect();
let mut b: Vec<i128> = vec![0; K];
let mut moyenne_a = vec![0i128; K];
let len = selected_vect.len();
for j in 0..len {
moyenne_a.iter_mut().zip(selected_vect[j].iter()).for_each(|(x, y)| { *x = *x + (*y).abs() as i128 });
b.iter_mut().zip(selected_vect[j].iter()).for_each(|(x, y)| { *x = *x + ((*y).signum() * selected_vect[j][K]) as i128 });
}
let mut max = moyenne_a[0];
moyenne_a.iter_mut().for_each(|x| { if *x > max { max = *x } });
let moy_b: i128 = (b.iter().sum::<i128>()) / (K as i128);
b.iter_mut().for_each(|x| { *x = *x - moy_b });
b.iter_mut().zip(moyenne_a.iter_mut()).for_each(|(x, y)| {
if *y != 0 {
*x = ((*x as f64) * ((max as f64) / (*y as f64))) as i128;
}
});
let mut r: Vec<f64> = vec![0.0; K];
for i in 0..K {
r[i] = (b[i] as f64) / ((K * len) as f64);
}
let mut moy0: f64 = 0.0;
let mut moy1: f64 = 0.0;
let mut compt0: f64 = 0.0;
let mut compt1: f64 = 0.0;
for i in 0..K {
if s[i] == 1 {
moy1 += r[i];
compt1 += 1.0;
} else {
moy0 += r[i];
compt0 += 1.0;
}
}
moy0 = moy0 / compt0;
moy1 = moy1 / compt1;
println!("moyenne des 1: {}", moy1);
println!("moyenne des 0: {}", moy0);
println!("compteur: {}", len);
println!("nombre d'echantillons: {}", n);
let mut round_key: Vec<i64> = vec![0; K];
let mut round_key_2: Vec<i64> = vec![0; K];
for i in 0..K {
if r[i] < 0.0 {
round_key[i] = 0;
} else {
round_key[i] = r[i].round() as i64;
}
if round_key[i] > 1 {
round_key[i] = 1;
}
}
for i in 0..K {
if r[i] - (-0.5) < 0.0 {
round_key_2[i] = 0;
} else {
round_key_2[i] = (r[i] - (-0.5)).round() as i64;
}
if round_key_2[i] > 1 {
round_key_2[i] = 1;
}
}
println!(" indice :");
for i in 0..K {
print!(" {}", i + 1);
}
println!();
println!(" round_key :");
for i in 0..K {
print!(" {}", round_key[i]);
}
println!();
println!("round_key_2 :");
for i in 0..K {
print!(" {}", round_key_2[i]);
}
println!();
println!(" vrai s :");
for i in 0..K {
print!(" {}", s[i]);
}
println!();
println!("differences :");
for i in 0..K {
if round_key[i] - (s[i] as i64) == 0 {
print!(" V ");
} else {
print!(" X ");
}
}
println!();
println!("differences2:");
for i in 0..K {
if round_key_2[i] - (s[i] as i64) == 0 {
print!(" V ");
} else {
print!(" X ");
}
}
println!();
println!("compteur: {}", len);
println!("nombre d'echantillons: {}", n);
}
// */