
reparametrization/marginalization for feature mask #44

@DimGorr

Description


Hi!
I have a question about the feature selection part. The article claims that, in order to solve the potential issue with important features whose values are zero, you use 1) a Monte Carlo estimate to sample feature subsets and then 2) reparametrization.

So the questions I have are:
a) I'm struggling to find the implementation of 1). Could you maybe help me with that? And if it's not implemented, would that be a problem for important features that are equal to zero? (See the sketch at the end of this issue for what I have in mind.)
b) I found the reparametrization part, but it's set to False by default (the variable marginalize below; the code is taken from class ExplainModule(nn.Module), function forward), and I don't see any place where marginalize is set to True. Is that just because you forgot to change it back after some testing, or does reparametrization worsen the results?

if marginalize:
    # sample z ~ N(-x, std=1/2) elementwise, so that x + z is zero-mean noise
    std_tensor = torch.ones_like(x, dtype=torch.float) / 2
    mean_tensor = torch.zeros_like(x, dtype=torch.float) - x
    z = torch.normal(mean=mean_tensor, std=std_tensor)
    # masked-out features (feat_mask ~ 0) are replaced by that noise;
    # kept features (feat_mask ~ 1) pass through unchanged
    x = x + z * (1 - feat_mask)
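
For my own understanding, here is a minimal runnable sketch of what this branch does; the toy values of x and feat_mask are mine, only the marginalization lines are from the repo:

import torch

torch.manual_seed(0)

# toy example: the second feature is 0 but could still be important
x = torch.tensor([[3.0, 0.0, -1.0]])
# hypothetical learned mask: keep feature 0, drop features 1 and 2
feat_mask = torch.tensor([[1.0, 0.0, 0.0]])

std_tensor = torch.ones_like(x, dtype=torch.float) / 2
mean_tensor = torch.zeros_like(x, dtype=torch.float) - x
z = torch.normal(mean=mean_tensor, std=std_tensor)
print(x + z * (1 - feat_mask))
# the kept feature stays 3.0; each dropped feature becomes ~N(0, std=0.5) noise,
# so a dropped zero-valued feature no longer looks identical to "kept and equal to 0"

If this reading is right, then without marginalization a zero-valued feature looks the same whether it is kept or dropped, which is exactly what I'm worried about.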

c) Do you know whether PyTorch Geometric has exactly the same implementation as here?

I'm actually asking this to find out whether PyTorch Geometric could mistakenly report a feature as unimportant just because its value happens to be zero :) Thank you in advance!
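
For concreteness, this is roughly what I would expect the Monte Carlo estimate in 1) to look like; mc_feature_marginalization, model, and mask_logits are hypothetical names of mine, not from either codebase:

import torch

def mc_feature_marginalization(model, x, mask_logits, num_samples=16):
    # P(feature is kept), from the learnable mask logits
    probs = torch.sigmoid(mask_logits)
    outs = []
    for _ in range(num_samples):
        # sample a hard feature subset S ~ Bernoulli(probs)
        subset = (torch.rand_like(x) < probs).float()
        # replace dropped features with zero-mean noise instead of 0,
        # mirroring the marginalize branch quoted above
        z = torch.normal(mean=-x, std=torch.full_like(x, 0.5))
        outs.append(model(x + z * (1 - subset)))
    # average over sampled subsets = Monte Carlo estimate of the expectation
    return torch.stack(outs).mean(dim=0)

A hard sample like this is not differentiable with respect to mask_logits, which I assume is where the reparametrization in 2) would come in.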
