
May help understand why you think your method's coefficients are well reconstructed #3

Open
Sandy4321 opened this issue Aug 7, 2019 · 12 comments


@Sandy4321

This may help understand why you think your method's coefficients from Fig. 4 are reconstructed well, when they are still very different from the originals in Fig. 1.

[image attachment]

@vene
Owner

vene commented Aug 7, 2019

Fig. 4 is clearly better at identifying the support, although it gets the sign wrong for some of the coefficients. In any case, this is a synthetic example attempting to reproduce a figure from the cited paper: please refer to the paper itself (http://www.jmlr.org/proceedings/papers/v51/figueiredo16.pdf) for more information.
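For concreteness, here is a minimal sketch of how one might score support identification and sign agreement on an example like this. The helper name and tolerance are illustrative, not part of this repo:

```python
import numpy as np

def support_metrics(x_true, x_hat, tol=1e-8):
    # Hypothetical scoring helper: precision/recall of the recovered
    # support, plus how often the sign matches on the true support.
    s_true = np.abs(x_true) > tol
    s_hat = np.abs(x_hat) > tol
    tp = np.sum(s_hat & s_true)                  # correctly detected non-zeros
    precision = tp / max(s_hat.sum(), 1)
    recall = tp / max(s_true.sum(), 1)
    sign_agreement = (
        float(np.mean(np.sign(x_hat[s_true]) == np.sign(x_true[s_true])))
        if s_true.any() else 1.0
    )
    return precision, recall, sign_agreement
```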

@Sandy4321
Author

Yes, they do achieve perfect coefficient reconstruction.

But in your case both the amplitudes and the signs are a problem.
Do you know of other OWL implementations to try?

[image attachment]

@vene
Owner

vene commented Aug 7, 2019 via email

@Sandy4321
Author

So what can be done to make your code run correctly?

@vene
Owner

vene commented Aug 7, 2019 via email

@Sandy4321
Author

But the original true coefficients around sample 25 have a negative sign and an amplitude of 0.2 (Fig. 1), while your code gives both positive and negative signs and an amplitude of 0.050 (Fig. 4)?

@vene
Owner

vene commented Aug 7, 2019 via email

@Sandy4321
Author

I see, thanks.
I assumed that, as in the paper ("Figure 1: Toy example illustrating the qualitatively different behaviour of OWL and LASSO"), perfect reconstruction is possible, as written in the paper.
Of course, you are aware of this difference, so it would be very interesting to hear how you explain it.
Are they cheating by claiming that they can achieve perfect reconstruction? As they wrote, "while OWL successfully recovers its structure":

"We conclude this section with a simple toy example (Fig. 1) illustrating the qualitatively different behaviour of OWL and LASSO regularization. In this example, p = 100, n = 10, and x* has 20 non-zero components in 2 groups of size 10, with the corresponding columns of A being highly correlated. Clearly, n = 10 is insufficient to allow LASSO to recover x*, which is 20-sparse, while OWL successfully recovers its structure."
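For reference, a minimal sketch of how that toy setup could be generated, using the dimensions from the quote. The group positions, the 0.2 amplitude (matching the scale discussed above), and the exact correlation scheme are assumptions, since the paper does not spell out the generator:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 100, 10                      # dimensions as stated in the quote

# x* with 20 non-zeros in 2 groups of size 10. The positions and the
# 0.2 amplitude are assumptions.
x_star = np.zeros(p)
x_star[20:30] = -0.2
x_star[60:70] = 0.2

# Design matrix whose columns are highly correlated inside each group.
A = rng.standard_normal((n, p))
for start in (20, 60):
    base = rng.standard_normal(n)
    for j in range(start, start + 10):
        A[:, j] = base + 0.01 * rng.standard_normal(n)
A /= np.linalg.norm(A, axis=0)      # unit-norm columns

y = A @ x_star                      # noiseless observations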

@vene
Owner

vene commented Aug 7, 2019 via email

@Sandy4321
Author

As you wrote:

"I attribute this recovery error to the experimental setup and hyperparameters, not to any algorithmic bugs."

That is exactly what I asked above: what can be done to make your code run correctly?

So the answer is "try tuning the hyperparameters", am I right?
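For concreteness, a tuning loop along those lines might look like this. This is a generic proximal-gradient sketch with OSCAR-style weights, not this repo's API; the grid values are placeholders, and it reuses `A`, `y`, `x_star` from the sketch above. The OWL prox follows the sorted-soft-threshold plus isotonic-projection construction of Bogdan et al. / Zeng & Figueiredo:

```python
import numpy as np
from sklearn.isotonic import isotonic_regression

def prox_owl(v, w):
    # Prox of the OWL norm: sort |v| in decreasing order, subtract the
    # sorted weights w, project onto the non-increasing cone, clip at
    # zero, then restore the original order and signs.
    order = np.argsort(np.abs(v))[::-1]
    z = np.abs(v)[order] - w
    z = np.clip(isotonic_regression(z, increasing=False), 0.0, None)
    out = np.empty_like(v)
    out[order] = z
    return np.sign(v) * out

def owl_least_squares(A, y, w, n_iter=1000):
    # Proximal gradient for 0.5 * ||Ax - y||^2 + OWL_w(x).
    L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = prox_owl(x - grad / L, w / L)
    return x

# OSCAR-style weights w_i = lam1 + lam2 * (p - 1 - i); grid values are guesses.
p = A.shape[1]
best = None
for lam1 in (1e-4, 1e-3, 1e-2, 1e-1):
    for lam2 in (1e-4, 1e-3, 1e-2, 1e-1):
        w = lam1 + lam2 * np.arange(p - 1, -1, -1, dtype=float)
        x_hat = owl_least_squares(A, y, w)
        err = np.linalg.norm(x_hat - x_star)   # only possible on synthetic data
        if best is None or err < best[0]:
            best = (err, lam1, lam2, x_hat)
print("best error %.4f at lam1=%g, lam2=%g" % best[:3])
```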

@vene
Owner

vene commented Aug 7, 2019 via email

@Sandy4321
Author

"And keep in mind that what works on a simulated example might not work the same way on real data."

But in the paper they show simulation results?
By the way, is it suitable for logistic regression?
