Save some memory in simplex constrain #3168

Open · WardBrian wants to merge 3 commits into develop
Conversation

@WardBrian (Member)

Summary

I was looking at simplex_constrain's rev implementation and realized two things:

  1. It duplicates the prim function for the forward pass in order to store the zs, but is otherwise the same
  2. We are exactly re-computing each z in the reverse pass, anyway (at least for the version that takes lp)

So, by some napkin math, we can save ~20% of the memory overhead of the rev implementation just by not storing these, and simplify the code by delegating the forward pass to prim (a sketch of the pattern follows below).

The same is very nearly true for the stochastic matrices, except that the forward pass inside the rev specialization is written in a more vectorized style than the prim one, so it isn't a direct swap. It's still true that we are re-computing something we're also storing, so I still save the memory there; I just don't replace the forward pass with a call to prim.

If we want, I can move the vectorized stochastic code into prim and then do the other half of the change to rev for those functions as well.
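For illustration only, here is a self-contained sketch of that pattern in plain Eigen with made-up function names (not the code in this PR): the forward pass is just the prim-style stick-breaking transform, and the reverse pass recovers each z and stick length from the outputs alone, so nothing beyond inputs and outputs needs to be stored.

#include <Eigen/Dense>
#include <cmath>

// prim-style forward pass: stick-breaking simplex transform (no extra storage)
Eigen::VectorXd simplex_forward(const Eigen::VectorXd& y) {
  const Eigen::Index N = y.size();
  Eigen::VectorXd x(N + 1);
  double stick_len = 1.0;
  for (Eigen::Index k = 0; k < N; ++k) {
    // z_k = inv_logit(y_k - log(N - k))
    double z_k = 1.0 / (1.0 + std::exp(-(y(k) - std::log(static_cast<double>(N - k)))));
    x(k) = stick_len * z_k;
    stick_len -= x(k);
  }
  x(N) = stick_len;
  return x;
}

// reverse pass: vector-Jacobian product dL/dy given dL/dx, recomputing each z_k
// and stick length from x itself rather than reading a stored copy
// (sketch only; ignores the stick_len == 0 edge case)
Eigen::VectorXd simplex_reverse(const Eigen::VectorXd& x, const Eigen::VectorXd& x_adj) {
  const Eigen::Index N = x.size() - 1;
  Eigen::VectorXd y_adj(N);
  double stick_len = x(N);       // stick remaining after segment N - 1
  double acc = x_adj(N) * x(N);  // sum of x_adj(j) * x(j) over j > k
  for (Eigen::Index k = N - 1; k >= 0; --k) {
    double prev_stick = stick_len + x(k);  // stick length before segment k
    double z_k = x(k) / prev_stick;        // recovered z_k
    y_adj(k) = z_k * (1 - z_k) * prev_stick * (x_adj(k) - acc / stick_len);
    acc += x_adj(k) * x(k);
    stick_len = prev_stick;
  }
  return y_adj;
}

Since prev_stick and z_k fall out of x during the backward sweep, the rev specialization does not need to keep its own copy of the zs; it can delegate the forward pass to prim and do exactly this kind of recomputation in its reverse-pass callback.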

Tests

Existing tests pass

Side Effects

None

Release notes

Reduce the memory overhead of the simplex constraints in reverse mode.

Checklist

  • Copyright holder: Simons Foundation

    The copyright holder is typically you or your assignee, such as a university or company. By submitting this pull request, the copyright holder is agreeing to license the submitted work under the following licenses:
    - Code: BSD 3-clause (https://opensource.org/licenses/BSD-3-Clause)
    - Documentation: CC-BY 4.0 (https://creativecommons.org/licenses/by/4.0/)

  • the basic tests are passing

    • unit tests pass (to run, use: ./runTests.py test/unit)
    • header checks pass (make test-headers)
    • dependencies checks pass (make test-math-dependencies)
    • docs build (make doxygen)
    • code passes the built-in C++ standards checks (make cpplint)
  • the code is written in idiomatic C++ and changes are documented in the doxygen

  • the new changes are tested

@WardBrian requested a review from @SteveBronder on March 25, 2025, 15:04
@stan-buildbot (Contributor)


Name    Old Result    New Result    Ratio    Performance change (1 - new / old)
arma/arma.stan 0.45 0.36 1.27 21.45% faster
low_dim_corr_gauss/low_dim_corr_gauss.stan 0.01 0.01 1.04 3.58% faster
gp_regr/gen_gp_data.stan 0.03 0.03 1.11 10.28% faster
gp_regr/gp_regr.stan 0.11 0.1 1.18 15.09% faster
sir/sir.stan 86.11 73.66 1.17 14.46% faster
irt_2pl/irt_2pl.stan 6.03 4.48 1.35 25.74% faster
eight_schools/eight_schools.stan 0.07 0.06 1.15 12.87% faster
pkpd/sim_one_comp_mm_elim_abs.stan 0.31 0.27 1.17 14.78% faster
pkpd/one_comp_mm_elim_abs.stan 23.38 21.17 1.1 9.43% faster
garch/garch.stan 0.61 0.45 1.35 26.12% faster
low_dim_gauss_mix/low_dim_gauss_mix.stan 3.39 2.81 1.2 16.95% faster
arK/arK.stan 2.17 1.87 1.17 14.23% faster
gp_pois_regr/gp_pois_regr.stan 3.4 3.03 1.12 10.86% faster
low_dim_gauss_mix_collapse/low_dim_gauss_mix_collapse.stan 10.51 13.85 0.76 -31.8% slower
performance.compilation 225.77 230.36 0.98 -2.03% slower
Mean result: 1.1418578467749307

Commit hash: 4adb4ac5420f16dca008d1b8a4c39b44dd6fea4d


Machine information: Ubuntu 20.04.3 LTS (focal)

CPU: Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz (x86_64, 2 sockets × 20 cores × 2 threads = 80 CPUs)

G++: g++ (Ubuntu 9.4.0-1ubuntu1~20.04) 9.4.0

Clang: clang version 10.0.0-4ubuntu1 (x86_64-pc-linux-gnu)

@spinkney (Collaborator)

Nice find! This reminds me that we should put in the ILR-based simplex instead of stick-breaking. My only hold-up is that I'd like to keep the stick-breaking code as well. We haven't come to a consensus on how to add a different parameterization of the same constraint type.

@WardBrian (Member, Author) commented Mar 27, 2025

@bob-carpenter has stated before that he's of the opinion that we should just completely replace stick-breaking with the ILR transform for simplexes, rather than provide an option.

Another ~easy option is to provide some compile-time define to switch, rather than deciding on a language-syntax-level option.
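For concreteness, a hypothetical sketch of what that could look like (the macro and the second function name here are made-up illustrative names, not existing Stan Math options):

// Hypothetical dispatch only: STAN_MATH_SIMPLEX_ILR and
// simplex_stick_breaking_constrain are illustrative names, not existing APIs.
template <typename Vec, typename Lp>
inline auto simplex_constrain_dispatch(const Vec& y, Lp& lp) {
#ifdef STAN_MATH_SIMPLEX_ILR
  return simplex_ilr_constrain(y, lp);             // sum-to-zero + softmax
#else
  return simplex_stick_breaking_constrain(y, lp);  // current stick-breaking default
#endif
}

Users (or the compiler wrapper) would then opt in with something like -DSTAN_MATH_SIMPLEX_ILR, without any change at the Stan language level.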

@spinkney (Collaborator)

> @bob-carpenter has stated before that he's of the opinion that we should just completely replace stick-breaking with the ILR transform for simplexes, rather than provide an option.

The ILR one is super easy. We construct a sum_to_zero_vector (just like we already have) and then softmax it. There's a Jacobian adjustment for the softmax.

vector[N - 1] y; // free param
vector[N] z = sum_to_zero_constrain(y);
vector[N] s = softmax(z); // simplex

real jacobian_adj = -N * log_sum_exp(z) + 0.5 * log(N);
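As a quick check of that adjustment: since z sums to zero and s = softmax(z), the sum of the log simplex elements reduces as follows (the ½ log N constant is the contribution of the orthonormal sum-to-zero basis, taken as given from the line above):

\log s_i = z_i - \log\sum_{j=1}^{N} e^{z_j},
\qquad
\sum_{i=1}^{N} \log s_i = \underbrace{\sum_{i=1}^{N} z_i}_{=\,0} - N \log\sum_{j=1}^{N} e^{z_j} = -N\,\operatorname{log\_sum\_exp}(z),

\log \lvert \det J \rvert = \sum_{i=1}^{N} \log s_i + \tfrac{1}{2}\log N = -N\,\operatorname{log\_sum\_exp}(z) + \tfrac{1}{2}\log N,

which matches the jacobian_adj line above.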

> Another ~easy option is to provide some compile-time define to switch, rather than deciding on a language-syntax-level option.

Whatever is easiest. I'd like to save all the transforms in some directory: the default ones can be the ones we think are "best" from what we've seen, with all the others stored separately. I think this requires a lot of repo organizing, though.

@bob-carpenter (Member)

We need to choose a default and I think this should be it. If there are cases where we'd still want to use the old one, we can leave in the constraining/unconstraining functions for it and let people do it manually.

@spinkney (Collaborator)

> We need to choose a default and I think this should be it. If there are cases where we'd still want to use the old one, we can leave in the constraining/unconstraining functions for it and let people do it manually.

Ah yeah, I saw that the constraint functions are getting exposed! Cool, that works then.

@WardBrian we can do this in 2 loops using the online softmax (https://arxiv.org/abs/1805.02867). The first loop constructs the sum-to-zero vector, tracks its running max, and accumulates the sum of exponentials with the max subtracted. The second loop takes the safe exponential of the sum-to-zero vector with the max subtracted and divides by the sum of exponentials from the first loop. The Jacobian is output after.

// Sketch only: assumes the usual stan/math (prim) includes and namespace, and
// that y holds doubles (the online-softmax accumulators below are plain doubles).
template <typename Vec, typename Lp>
inline plain_type_t<Vec> simplex_ilr_constrain(const Vec& y, Lp& lp) {
  const auto N = y.size();

  plain_type_t<Vec> z = Eigen::VectorXd::Zero(N + 1);
  if (unlikely(N == 0)) {
    z.coeffRef(0) = 1;  // softmax of a single zero is 1
    return z;
  }

  auto&& y_ref = to_ref(y);
  value_type_t<Vec> sum_w(0);

  // online softmax accumulators
  double d = 0;  // running sum of exponentials
  double max_val = NEGATIVE_INFTY;
  double max_val_old = NEGATIVE_INFTY;

  for (int i = N; i > 0; --i) {
    double n = static_cast<double>(i);
    auto w = y_ref(i - 1) * inv_sqrt(n * (n + 1));
    sum_w += w;

    z.coeffRef(i - 1) += sum_w;
    z.coeffRef(i) -= w * n;

    // z(i) is final after this iteration, so fold it into the running max / sum
    max_val = std::fmax(max_val_old, z.coeff(i));
    d = d * std::exp(max_val_old - max_val) + std::exp(z.coeff(i) - max_val);
    max_val_old = max_val;
  }

  // z(0) is only finalized on the last iteration, so fold it in after the loop
  max_val = std::fmax(max_val_old, z.coeff(0));
  d = d * std::exp(max_val_old - max_val) + std::exp(z.coeff(0) - max_val);

  // second loop: safe softmax over all N + 1 elements
  for (int i = 0; i <= N; ++i) {
    z.coeffRef(i) = std::exp(z.coeff(i) - max_val) / d;
  }

  // log_sum_exp(z) = max_val + log(d); the simplex here has N + 1 elements
  lp += -(N + 1) * (max_val + std::log(d)) + 0.5 * std::log(N + 1.0);

  return z;
}
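A quick way to exercise the sketch above (assuming it sits in a translation unit that includes stan/math.hpp and uses the stan::math namespace; the comparison also assumes the hand-rolled sum-to-zero recursion matches stan::math::sum_to_zero_constrain, which it was adapted from):

#include <stan/math.hpp>
#include <iostream>

int main() {
  using stan::math::softmax;
  using stan::math::sum_to_zero_constrain;

  Eigen::VectorXd y = Eigen::VectorXd::Random(4);
  double lp = 0;
  Eigen::VectorXd s = simplex_ilr_constrain(y, lp);

  // the result should be a simplex and agree with softmax(sum_to_zero_constrain(y))
  Eigen::VectorXd ref = softmax(sum_to_zero_constrain(y));
  std::cout << "sum(s) = " << s.sum() << "\n"
            << "max abs diff vs softmax(sum_to_zero_constrain(y)) = "
            << (s - ref).cwiseAbs().maxCoeff() << std::endl;
  return 0;
}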
