2 changes: 2 additions & 0 deletions CHANGELOG.md
@@ -175,6 +175,8 @@
generated for inclusion in BOLT 12 `Offer`s will no longer be accepted. As
most blinded message paths are ephemeral, this should only invalidate issued
BOLT 12 `Refund`s in practice (#3917).
* Blinded message paths included in BOLT 12 `Offer`s generated by LDK 0.2 will
not be accepted by prior versions of LDK after downgrade (#3917).
* Once a channel has been spliced, LDK can no longer be downgraded.
`UserConfig::reject_inbound_splices` can be set to block inbound ones (#4150)
* Downgrading after setting `UserConfig::enable_htlc_hold` is not supported
2 changes: 1 addition & 1 deletion fuzz/src/onion_message.rs
@@ -260,7 +260,7 @@ impl NodeSigner for KeyProvider {
}

fn get_expanded_key(&self) -> ExpandedKey {
-		unreachable!()
+		ExpandedKey::new([42; 32])
}

fn sign_invoice(
20 changes: 11 additions & 9 deletions lightning/src/blinded_path/payment.rs
@@ -14,7 +14,7 @@ use bitcoin::secp256k1::{self, PublicKey, Secp256k1, SecretKey};

use crate::blinded_path::utils::{self, BlindedPathWithPadding};
use crate::blinded_path::{BlindedHop, BlindedPath, IntroductionNode, NodeIdLookUp};
-use crate::crypto::streams::ChaChaDualPolyReadAdapter;
+use crate::crypto::streams::{ChaChaTriPolyReadAdapter, TriPolyAADUsed};
use crate::io;
use crate::io::Cursor;
use crate::ln::channel_state::CounterpartyForwardingInfo;
@@ -284,18 +284,20 @@ impl BlindedPaymentPath {
node_signer.ecdh(Recipient::Node, &self.inner_path.blinding_point, None)?;
let rho = onion_utils::gen_rho_from_shared_secret(&control_tlvs_ss.secret_bytes());
let receive_auth_key = node_signer.get_receive_auth_key();
let phantom_auth_key = node_signer.get_expanded_key().phantom_node_blinded_path_key;
Contributor:
You could have None here in case the node isn't configured for phantom payments. I think that might help keep that case in mind for readers, and it also saves some computation?

Collaborator (Author):
Yea, we don't currently have a separate method on the signer that returns None in cases where we aren't doing phantom, so we'd have to add a new key-fetcher (or bool-fetcher) to decide whether to do phantom validation. I thought about doing that, but it seemed way cleaner to reuse the ExpandedKey (which is already used for all the inbound-payment key logic, including phantom). It's also not very much compute to just do one round of Poly1305 plus the finish logic, so I'm not sure it's worth the extra work.

let read_arg = (rho, receive_auth_key.0, phantom_auth_key);

let encrypted_control_tlvs =
&self.inner_path.blinded_hops.get(0).ok_or(())?.encrypted_payload;
let mut s = Cursor::new(encrypted_control_tlvs);
let mut reader = FixedLengthReader::new(&mut s, encrypted_control_tlvs.len() as u64);
-		let ChaChaDualPolyReadAdapter { readable, used_aad } =
-			ChaChaDualPolyReadAdapter::read(&mut reader, (rho, receive_auth_key.0))
-				.map_err(|_| ())?;
-
-		match (&readable, used_aad) {
-			(BlindedPaymentTlvs::Forward(_), false)
-			| (BlindedPaymentTlvs::Dummy(_), true)
-			| (BlindedPaymentTlvs::Receive(_), true) => Ok((readable, control_tlvs_ss)),
+		let ChaChaTriPolyReadAdapter { readable, used_aad } =
+			ChaChaTriPolyReadAdapter::read(&mut reader, read_arg).map_err(|_| ())?;
+
+		match (&readable, used_aad == TriPolyAADUsed::NoAAD) {
+			(BlindedPaymentTlvs::Forward(_), true)
+			| (BlindedPaymentTlvs::Dummy(_), false)
+			| (BlindedPaymentTlvs::Receive(_), false) => Ok((readable, control_tlvs_ss)),
_ => Err(()),
}
}
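To spell out the new acceptance logic: forward TLVs must have authenticated with *no* AAD, while receive and dummy TLVs must have authenticated under one of the two AADs passed in `read_arg` (the receive-auth key as `A`, the phantom key as `B`). An equivalent form of the match written out per variant (editor's sketch, not part of the diff):

```rust
match (&readable, used_aad) {
	// Forwarding hops are MAC'd as plain ChaCha20Poly1305, with no auth AAD.
	(BlindedPaymentTlvs::Forward(_), TriPolyAADUsed::NoAAD) => Ok((readable, control_tlvs_ss)),
	// Receive and dummy TLVs must have been MAC'd with either the node's
	// receive-auth key (A) or the shared phantom key (B).
	(BlindedPaymentTlvs::Receive(_), TriPolyAADUsed::A | TriPolyAADUsed::B)
	| (BlindedPaymentTlvs::Dummy(_), TriPolyAADUsed::A | TriPolyAADUsed::B) => {
		Ok((readable, control_tlvs_ss))
	},
	_ => Err(()),
}
```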
70 changes: 45 additions & 25 deletions lightning/src/crypto/streams.rs
@@ -58,7 +58,7 @@ impl<'a, T: Writeable> Writeable for ChaChaPolyWriteAdapter<'a, T> {
}

/// Encrypts the provided plaintext with the given key using ChaCha20Poly1305 in the modified
-/// with-AAD form used in [`ChaChaDualPolyReadAdapter`].
+/// with-AAD form used in [`ChaChaTriPolyReadAdapter`].
pub(crate) fn chachapoly_encrypt_with_swapped_aad(
mut plaintext: Vec<u8>, key: [u8; 32], aad: [u8; 32],
) -> Vec<u8> {
@@ -84,34 +84,48 @@ pub(crate) fn chachapoly_encrypt_with_swapped_aad(
plaintext
}

#[derive(PartialEq, Eq)]
pub(crate) enum TriPolyAADUsed {
/// No AAD was used.
///
	/// The MAC validated with standard ChaCha20Poly1305.
	NoAAD,
	/// The MAC validated using the first AAD provided.
	A,
	/// The MAC validated using the second AAD provided.
B,
}
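The three candidate tags differ only in their trailing Poly1305 input. A minimal sketch of the two layouts, distilled from the `read` implementation below (the `MacInput` trait is hypothetical, standing in for LDK's `Poly1305`):

```rust
/// Hypothetical stand-in for the Poly1305 instance used in `read` below.
trait MacInput {
	fn input(&mut self, data: &[u8]);
}

// NoAAD: standard ChaCha20Poly1305 with an empty AAD:
//   poly1305(ciphertext || pad16 || le64(0) || le64(ct_len))
fn finish_no_aad<M: MacInput>(mac: &mut M, ct_len: u64) {
	mac.input(&0u64.to_le_bytes());
	mac.input(&ct_len.to_le_bytes());
}

// A/B: the 32-byte AAD is fed *after* the ciphertext, with the two length
// fields swapped to match:
//   poly1305(ciphertext || pad16 || aad || le64(ct_len) || le64(32))
fn finish_with_aad<M: MacInput>(mac: &mut M, aad: &[u8; 32], ct_len: u64) {
	mac.input(&aad[..]); // 32 bytes: already a multiple of 16, no padding needed
	mac.input(&ct_len.to_le_bytes());
	mac.input(&32u64.to_le_bytes());
}
```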

/// Enables the use of the serialization macros for objects that need to be simultaneously decrypted
/// and deserialized. This allows us to avoid an intermediate Vec allocation.
///
-/// This variant of [`ChaChaPolyReadAdapter`] calculates Poly1305 tags twice, once using the given
-/// key and once with the given 32-byte AAD appended after the encrypted stream, accepting either
-/// being correct as sufficient.
+/// This variant of [`ChaChaPolyReadAdapter`] calculates Poly1305 tags thrice: once using the given
+/// key alone and twice more with each of the two given 32-byte AADs appended after the encrypted
+/// stream, accepting any one of them being correct as sufficient.
///
-/// Note that we do *not* use the provided AAD as the standard ChaCha20Poly1305 AAD as that would
+/// Note that we do *not* use the provided AADs as the standard ChaCha20Poly1305 AAD as that would
/// require placing it first and prevent us from avoiding redundant Poly1305 rounds. Instead, the
/// ChaCha20Poly1305 MAC check is tweaked to move the AAD to *after* the contents being
/// checked, effectively treating the contents as the AAD for the AAD-containing MAC but behaving
/// like classic ChaCha20Poly1305 for the non-AAD-containing MAC.
-pub(crate) struct ChaChaDualPolyReadAdapter<R: Readable> {
+pub(crate) struct ChaChaTriPolyReadAdapter<R: Readable> {
Contributor:
Would you say there is sufficient test coverage on this? I think there is some higher-level coverage, but no direct unit test.

Collaborator (Author):
I think so? I mean, if it's horribly broken the higher-level code will also break. It's also somewhat indirectly fuzzed through the onion decoding fuzzing.

Contributor:
I think lower-level unit tests do have value even if there are higher-level tests already.

Collaborator (Author):
Not really sure what you'd want to test? That it can decrypt correctly? That's definitely well-covered. The one bug we did have in it was found by our fuzz test. More test coverage is great, but more tests to get the same coverage seems like a net negative.

pub readable: R,
-	pub used_aad: bool,
+	pub used_aad: TriPolyAADUsed,
}
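As a concrete form of the direct unit test discussed above, a round-trip sketch (untested illustration, assuming `chachapoly_encrypt_with_swapped_aad` pairs with this adapter the way the blinded-path code uses them):

```rust
#[test]
fn tri_poly_roundtrip() {
	let key = [7; 32];
	let (aad_a, aad_b) = ([1; 32], [2; 32]);
	// Encrypt a single byte under the second AAD...
	let ct = chachapoly_encrypt_with_swapped_aad(vec![42u8], key, aad_b);
	let mut slice = &ct[..];
	let mut reader = FixedLengthReader::new(&mut slice, ct.len() as u64);
	// ...and expect decryption to succeed, attributing the tag to AAD B.
	let res = ChaChaTriPolyReadAdapter::<u8>::read(&mut reader, (key, aad_a, aad_b)).unwrap();
	assert_eq!(res.readable, 42);
	assert!(res.used_aad == TriPolyAADUsed::B);
}
```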

-impl<T: Readable> LengthReadableArgs<([u8; 32], [u8; 32])> for ChaChaDualPolyReadAdapter<T> {
+impl<T: Readable> LengthReadableArgs<([u8; 32], [u8; 32], [u8; 32])>
+	for ChaChaTriPolyReadAdapter<T>
+{
// Simultaneously read and decrypt an object from a LengthLimitedRead storing it in
// Self::readable. LengthLimitedRead must be used instead of std::io::Read because we need the
// total length to separate out the tag at the end.
fn read<R: LengthLimitedRead>(
-		r: &mut R, params: ([u8; 32], [u8; 32]),
+		r: &mut R, params: ([u8; 32], [u8; 32], [u8; 32]),
) -> Result<Self, DecodeError> {
if r.remaining_bytes() < 16 {
return Err(DecodeError::InvalidValue);
}
-		let (key, aad) = params;
+		let (key, aad_a, aad_b) = params;

let mut chacha = ChaCha20::new(&key[..], &[0; 12]);
let mut mac_key = [0u8; 64];
@@ -125,7 +139,7 @@
let decrypted_len = r.remaining_bytes() - 16;
let s = FixedLengthReader::new(r, decrypted_len);
let mut chacha_stream =
-			ChaChaDualPolyReader { chacha: &mut chacha, poly: &mut mac, read_len: 0, read: s };
+			ChaChaTriPolyReader { chacha: &mut chacha, poly: &mut mac, read_len: 0, read: s };

let readable: T = Readable::read(&mut chacha_stream)?;
while chacha_stream.read.bytes_remain() {
@@ -142,14 +156,18 @@
mac.input(&[0; 16][0..16 - (read_len % 16)]);
}

-		let mut mac_aad = mac;
+		let mut mac_aad_a = mac;
+		let mut mac_aad_b = mac;

-		mac_aad.input(&aad[..]);
+		mac_aad_a.input(&aad_a[..]);
+		mac_aad_b.input(&aad_b[..]);
		// Note that we don't need to pad the AAD since it's a multiple of 16 bytes

// For the AAD-containing MAC, swap the AAD and the read data, effectively.
-		mac_aad.input(&(read_len as u64).to_le_bytes());
-		mac_aad.input(&32u64.to_le_bytes());
+		mac_aad_a.input(&(read_len as u64).to_le_bytes());
+		mac_aad_b.input(&(read_len as u64).to_le_bytes());
+		mac_aad_a.input(&32u64.to_le_bytes());
+		mac_aad_b.input(&32u64.to_le_bytes());

// For the non-AAD-containing MAC, leave the data and AAD where they belong.
mac.input(&0u64.to_le_bytes());
@@ -158,23 +176,25 @@
let mut tag = [0 as u8; 16];
r.read_exact(&mut tag)?;
if fixed_time_eq(&mac.result(), &tag) {
-			Ok(Self { readable, used_aad: false })
-		} else if fixed_time_eq(&mac_aad.result(), &tag) {
-			Ok(Self { readable, used_aad: true })
+			Ok(Self { readable, used_aad: TriPolyAADUsed::NoAAD })
+		} else if fixed_time_eq(&mac_aad_a.result(), &tag) {
+			Ok(Self { readable, used_aad: TriPolyAADUsed::A })
+		} else if fixed_time_eq(&mac_aad_b.result(), &tag) {
+			Ok(Self { readable, used_aad: TriPolyAADUsed::B })
} else {
return Err(DecodeError::InvalidValue);
}
}
}

-struct ChaChaDualPolyReader<'a, R: Read> {
+struct ChaChaTriPolyReader<'a, R: Read> {
chacha: &'a mut ChaCha20,
poly: &'a mut Poly1305,
read_len: usize,
pub read: R,
}

-impl<'a, R: Read> Read for ChaChaDualPolyReader<'a, R> {
+impl<'a, R: Read> Read for ChaChaTriPolyReader<'a, R> {
// Decrypts bytes from Self::read into `dest`.
// After all reads complete, the caller must compare the expected tag with
// the result of `Poly1305::result()`.
@@ -349,15 +369,15 @@ mod tests {
}

#[test]
-	fn short_read_chacha_dual_read_adapter() {
-		// Previously, if we attempted to read from a ChaChaDualPolyReadAdapter but the object
+	fn short_read_chacha_tri_read_adapter() {
+		// Previously, if we attempted to read from a ChaChaTriPolyReadAdapter but the object
// being read is shorter than the available buffer while the buffer passed to
-		// ChaChaDualPolyReadAdapter itself always thinks it has room, we'd end up
+		// ChaChaTriPolyReadAdapter itself always thinks it has room, we'd end up
// infinite-looping as we didn't handle `Read::read`'s 0 return values at EOF.
let mut stream = &[0; 1024][..];
let mut too_long_stream = FixedLengthReader::new(&mut stream, 2048);
-		let keys = ([42; 32], [99; 32]);
-		let res = super::ChaChaDualPolyReadAdapter::<u8>::read(&mut too_long_stream, keys);
+		let keys = ([42; 32], [98; 32], [99; 32]);
+		let res = super::ChaChaTriPolyReadAdapter::<u8>::read(&mut too_long_stream, keys);
match res {
Ok(_) => panic!(),
Err(e) => assert_eq!(e, DecodeError::ShortRead),
15 changes: 10 additions & 5 deletions lightning/src/crypto/utils.rs
@@ -24,7 +24,7 @@ macro_rules! hkdf_extract_expand {
let (k1, k2, _) = hkdf_extract_expand!($salt, $ikm);
(k1, k2)
}};
-	($salt: expr, $ikm: expr, 6) => {{
+	($salt: expr, $ikm: expr, 7) => {{
let (k1, k2, prk) = hkdf_extract_expand!($salt, $ikm);

let mut hmac = HmacEngine::<Sha256>::new(&prk[..]);
@@ -47,18 +47,23 @@ macro_rules! hkdf_extract_expand {
hmac.input(&[6; 1]);
let k6 = Hmac::from_engine(hmac).to_byte_array();

-		(k1, k2, k3, k4, k5, k6)
+		let mut hmac = HmacEngine::<Sha256>::new(&prk[..]);
+		hmac.input(&k6);
+		hmac.input(&[7; 1]);
+		let k7 = Hmac::from_engine(hmac).to_byte_array();
+
+		(k1, k2, k3, k4, k5, k6, k7)
}};
}

pub fn hkdf_extract_expand_twice(salt: &[u8], ikm: &[u8]) -> ([u8; 32], [u8; 32]) {
hkdf_extract_expand!(salt, ikm, 2)
}

-pub fn hkdf_extract_expand_6x(
+pub fn hkdf_extract_expand_7x(
salt: &[u8], ikm: &[u8],
-) -> ([u8; 32], [u8; 32], [u8; 32], [u8; 32], [u8; 32], [u8; 32]) {
-	hkdf_extract_expand!(salt, ikm, 6)
+) -> ([u8; 32], [u8; 32], [u8; 32], [u8; 32], [u8; 32], [u8; 32], [u8; 32]) {
+	hkdf_extract_expand!(salt, ikm, 7)
}
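The macro's expansion step is a simple HMAC chain: each key is `HMAC-SHA256(prk, k_prev || [n])`, with a single-byte counter. A generalized free-standing sketch (illustration only, reusing this file's `bitcoin::hashes` imports; the exact first-step input lives in the elided base arm of the macro):

```rust
fn hkdf_expand_keys(prk: &[u8; 32], first: [u8; 32], n: u8) -> Vec<[u8; 32]> {
	// `first` is k1 as produced by the macro's base arm (elided above).
	let mut keys = vec![first];
	for i in 2..=n {
		let mut hmac = HmacEngine::<Sha256>::new(&prk[..]);
		hmac.input(&keys[keys.len() - 1]); // chain the previous key in
		hmac.input(&[i; 1]); // single-byte counter, as in the macro
		keys.push(Hmac::from_engine(hmac).to_byte_array());
	}
	keys
}
```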

#[inline]
4 changes: 2 additions & 2 deletions lightning/src/ln/blinded_payment_tests.rs
@@ -1694,7 +1694,7 @@ fn route_blinding_spec_test_vector() {
}
Ok(SharedSecret::new(other_key, &node_secret))
}
-	fn get_expanded_key(&self) -> ExpandedKey { unreachable!() }
+	fn get_expanded_key(&self) -> ExpandedKey { ExpandedKey::new([42; 32]) }
fn get_node_id(&self, _recipient: Recipient) -> Result<PublicKey, ()> { unreachable!() }
fn sign_invoice(
&self, _invoice: &RawBolt11Invoice, _recipient: Recipient,
@@ -2009,7 +2009,7 @@ fn test_trampoline_inbound_payment_decoding() {
}
Ok(SharedSecret::new(other_key, &node_secret))
}
-	fn get_expanded_key(&self) -> ExpandedKey { unreachable!() }
+	fn get_expanded_key(&self) -> ExpandedKey { ExpandedKey::new([42; 32]) }
fn get_node_id(&self, _recipient: Recipient) -> Result<PublicKey, ()> { unreachable!() }
fn sign_invoice(
&self, _invoice: &RawBolt11Invoice, _recipient: Recipient,
74 changes: 74 additions & 0 deletions lightning/src/ln/channelmanager.rs
@@ -13416,6 +13416,45 @@ macro_rules! create_offer_builder { ($self: ident, $builder: ty) => {

Ok(builder.into())
}

/// Creates an [`OfferBuilder`] such that the [`Offer`] it builds is recognized by any
/// [`ChannelManager`] (or [`OffersMessageFlow`]) using the same [`ExpandedKey`] (as returned
/// from [`NodeSigner::get_expanded_key`]). This allows any nodes participating in a BOLT 11
/// "phantom node" cluster to also receive BOLT 12 payments.
///
/// Note that, unlike with BOLT 11 invoices, BOLT 12 "phantom" offers do not in fact have any
/// "phantom node" appended to receiving paths. Instead, multiple blinded paths are simply
/// included which terminate at different final nodes.
///
/// `other_nodes_channels` must be set to a list of each participating node's `node_id` (from
/// [`NodeSigner::get_node_id`] with a [`Recipient::Node`]) and its channels.
///
/// `path_count_limit` is used to limit the number of blinded paths included in the resulting
/// [`Offer`]. Note that if this is less than the number of participating nodes (i.e.
	/// `other_nodes_channels.len() + 1`), not all nodes will participate in receiving funds.
/// Because the parameterized [`MessageRouter`] will only get a chance to limit the number of
/// paths *per-node*, it is important to set this for offers which will be included in a QR
/// code.
///
/// See [`Self::create_offer_builder`] for more details on the blinded path construction.
///
/// [`ExpandedKey`]: inbound_payment::ExpandedKey
pub fn create_phantom_offer_builder(
&$self, other_nodes_channels: Vec<(PublicKey, Vec<ChannelDetails>)>,
path_count_limit: usize,
) -> Result<$builder, Bolt12SemanticError> {
let mut peers = Vec::with_capacity(other_nodes_channels.len() + 1);
peers.push(($self.get_our_node_id(), $self.get_peers_for_blinded_path()));
for (node_id, peer_chans) in other_nodes_channels {
peers.push((node_id, Self::channel_details_to_forward_nodes(peer_chans)));
}

let builder = $self.flow.create_phantom_offer_builder(
&*$self.entropy_source, peers, path_count_limit
)?;

Ok(builder.into())
}
} }
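A hypothetical two-node phantom-cluster usage of the new builder (the node handles, amount, and shared-key setup are assumptions for illustration; both nodes must have been initialized so that `NodeSigner::get_expanded_key` returns the same `ExpandedKey`):

```rust
// node_a and node_b share the same NodeSigner::get_expanded_key() output.
let others = vec![(node_b.get_our_node_id(), node_b.list_channels())];
let offer = node_a
	.create_phantom_offer_builder(others, /* path_count_limit */ 2)?
	.amount_msats(10_000)
	.build()?;
// The offer now contains blinded paths terminating at both nodes, and either
// node can recognize invoice requests for it and claim the resulting payment.
```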

macro_rules! create_refund_builder { ($self: ident, $builder: ty) => {
@@ -14045,6 +14084,41 @@ where
now
}

/// Converts a list of channels to a list of peers which may be suitable to receive onion
/// messages through.
fn channel_details_to_forward_nodes(
mut channel_list: Vec<ChannelDetails>,
) -> Vec<MessageForwardNode> {
channel_list.sort_unstable_by_key(|chan| chan.counterparty.node_id);
let mut res = Vec::new();
// TODO: When MSRV reaches 1.77 use chunk_by
let mut start = 0;
while start < channel_list.len() {
let counterparty_node_id = channel_list[start].counterparty.node_id;
let end = channel_list[start..]
.iter()
.position(|chan| chan.counterparty.node_id != counterparty_node_id)
.map(|pos| start + pos)
.unwrap_or(channel_list.len());

let peer_chans = &channel_list[start..end];
if peer_chans.iter().any(|chan| chan.is_usable)
&& peer_chans.iter().any(|c| c.counterparty.features.supports_onion_messages())
{
res.push(MessageForwardNode {
node_id: peer_chans[0].counterparty.node_id,
short_channel_id: peer_chans
.iter()
.filter(|chan| chan.is_usable)
.filter_map(|chan| chan.short_channel_id)
.min(),
Contributor:
Why min?

Collaborator (Author):
It's the oldest, matching the existing logic we have for the non-phantom case.

Contributor:
Is that to maximize the chances that it still exists? Curious where the existing logic is located, also.

Collaborator (Author):
Yea, basically. Absent any better criteria, we assume that the oldest channel is the most likely to stick around. We do it in get_peers_for_blinded_path as well, using funded_channel.context.channel_creation_height.
})
}
start = end;
}
res
}
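Regarding the TODO above, a sketch of what the grouping loop could look like on MSRV >= 1.77 with `slice::chunk_by` (same behavior as the manual loop; editor's illustration, not part of the PR):

```rust
fn channel_details_to_forward_nodes(
	mut channel_list: Vec<ChannelDetails>,
) -> Vec<MessageForwardNode> {
	channel_list.sort_unstable_by_key(|chan| chan.counterparty.node_id);
	let mut res = Vec::new();
	// chunk_by yields maximal runs of adjacent channels with equal counterparty.
	for peer_chans in
		channel_list.chunk_by(|a, b| a.counterparty.node_id == b.counterparty.node_id)
	{
		if peer_chans.iter().any(|chan| chan.is_usable)
			&& peer_chans.iter().any(|c| c.counterparty.features.supports_onion_messages())
		{
			res.push(MessageForwardNode {
				node_id: peer_chans[0].counterparty.node_id,
				short_channel_id: peer_chans
					.iter()
					.filter(|chan| chan.is_usable)
					.filter_map(|chan| chan.short_channel_id)
					.min(), // the oldest usable channel, as discussed above
			});
		}
	}
	res
}
```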

fn get_peers_for_blinded_path(&self) -> Vec<MessageForwardNode> {
let per_peer_state = self.per_peer_state.read().unwrap();
per_peer_state
32 changes: 26 additions & 6 deletions lightning/src/ln/functional_test_utils.rs
@@ -4404,21 +4404,41 @@ pub fn create_chanmon_cfgs(node_count: usize) -> Vec<TestChanMonCfg> {

pub fn create_chanmon_cfgs_with_legacy_keys(
node_count: usize, predefined_keys_ids: Option<Vec<[u8; 32]>>,
) -> Vec<TestChanMonCfg> {
create_chanmon_cfgs_internal(node_count, predefined_keys_ids, false)
}

pub fn create_phantom_chanmon_cfgs(node_count: usize) -> Vec<TestChanMonCfg> {
create_chanmon_cfgs_internal(node_count, None, true)
}

pub fn create_chanmon_cfgs_internal(
node_count: usize, predefined_keys_ids: Option<Vec<[u8; 32]>>, phantom: bool,
) -> Vec<TestChanMonCfg> {
let mut chan_mon_cfgs = Vec::new();
let phantom_seed = if phantom { Some(&[42; 32]) } else { None };
for i in 0..node_count {
let tx_broadcaster = test_utils::TestBroadcaster::new(Network::Testnet);
let fee_estimator = test_utils::TestFeeEstimator::new(253);
let chain_source = test_utils::TestChainSource::new(Network::Testnet);
let logger = test_utils::TestLogger::with_id(format!("node {}", i));
let persister = test_utils::TestPersister::new();
-		let seed = [i as u8; 32];
-		let keys_manager = if predefined_keys_ids.is_some() {
-			// Use legacy (V1) remote_key derivation for tests using legacy key sets.
-			test_utils::TestKeysInterface::with_v1_remote_key_derivation(&seed, Network::Testnet)
-		} else {
-			test_utils::TestKeysInterface::new(&seed, Network::Testnet)
-		};
+		let mut seed = [i as u8; 32];
+		if phantom {
+			// We would ideally randomize keys on every test run, but some tests fail in that case.
Contributor:
Why is that ideal? Aren't non-deterministic tests ideal?

Collaborator (Author):
We've historically had tests that accidentally pass because of key ordering or hashmap ordering, which can hide bugs. IMO tests should be robust against basic things like that (which might change in future PRs anyway!).

Contributor:
I think there are pros and cons. Fully deterministic tests are desirable too sometimes.

Collaborator (Author):
Indeed, there are tradeoffs. In this case it's not a behavior change, so there aren't many :p
+			// Instead, we only randomize in the phantom case.
+			use core::hash::{BuildHasher, Hasher};
+			// Get a random value using the only std API to do so - the DefaultHasher
+			let rand_val = std::collections::hash_map::RandomState::new().build_hasher().finish();
+			seed[..8].copy_from_slice(&rand_val.to_ne_bytes());
+		}
+		let keys_manager = test_utils::TestKeysInterface::with_settings(
+			&seed,
+			Network::Testnet,
+			predefined_keys_ids.is_some(),
+			phantom_seed,
+		);
let scorer = RwLock::new(test_utils::TestScorer::new());

// Set predefined keys_id if provided