Step function-constrained optimization
\begin{equation} \min_{\mathbf{x}\in\mathbb{R}^{K}} ~~ f(\mathbf{x}),~~~~ \mbox{s.t.}~~ \|\mathbf{G}(\mathbf{x})\|_0^+\leq s,~\mathbf{x}\in \Omega \tag{SFCO} \end{equation}
where $\mathbf{G}(\mathbf{x})\in\mathbb{R}^{M \times N}$ with each entry being $G_{ij}(\mathbf{x})$, $1\leq i\leq M$, $1\leq j \leq N$, functions $f:\mathbb{R}^{K}\rightarrow \mathbb{R}$ and $G_{ij}:\mathbb{R}^{K}\rightarrow \mathbb{R}$ are (preferably twice) continuously differentiable, $\Omega\subseteq\mathbb{R}^{K}$ is a closed and convex set, and $s\ll N$ is an integer. For a matrix $\mathbf{Z}\in\mathbb{R}^{M \times N}$, the measure $\|\mathbf{Z}\|_0^+$ counts the number of its columns containing at least one positive entry, i.e., \begin{equation}\|\mathbf{Z}\|_0^+= \mathrm{step}\Big(\max_{i=1,\ldots,M} Z_{i1}\Big)+\cdots+\mathrm{step}\Big(\max_{i=1,\ldots,M} Z_{iN}\Big)\nonumber\end{equation} Here, $\mathrm{step}(t)$ is the step function (or 0/1 loss function) defined by $\mathrm{step}(t)=1$ if $t>0$ and $\mathrm{step}(t)=0$ otherwise. In particular, when $M=1$, for a vector $\mathbf{z}\in\mathbb{R}^{N}$, let $\mathbf{z}_+=(\max\{0,z_1\},\ldots,\max\{0,z_N\})^\top$ and let $\|\mathbf{z}\|_0$ denote its $\ell_0$ norm, which counts the number of its nonzero entries. As a result, \begin{equation*}\|\mathbf{z}\|_0^+= \mathrm{step}(z_1)+\cdots+\mathrm{step}(z_N)=\|\mathbf{z}_+\|_0\end{equation*}
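As a quick illustration of the measure $\|\mathbf{Z}\|_0^+$, the following sketch (a minimal NumPy implementation, with function name `step_norm` chosen for illustration) computes it by taking the column-wise maximum and counting the strictly positive results; for a 1-D input it reduces to $\|\mathbf{z}_+\|_0$ as noted above:

```python
import numpy as np

def step_norm(Z):
    """||Z||_0^+ : number of columns of Z whose maximum entry is positive,
    i.e. sum_j step(max_i Z_ij). A 1-D input z gives ||z_+||_0."""
    Z = np.atleast_2d(Z)                   # treat a vector as a 1-row matrix
    return int(np.sum(Z.max(axis=0) > 0))  # count columns with a positive entry

# Columns 1 and 3 each contain a positive entry; column 2 does not
Z = np.array([[ 1.0, -2.0, 0.0],
              [-0.5, -1.0, 3.0]])
print(step_norm(Z))  # -> 2

# Vector case: two strictly positive entries, so ||z_+||_0 = 2
z = np.array([0.0, -1.0, 2.0, 0.5])
print(step_norm(z))  # -> 2
```

Note that the count uses a strict inequality ($>0$), matching the definition of $\mathrm{step}$: zero entries do not contribute.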
The solver, SNSCO, can be downloaded here; it was developed in the following paper:
S. Zhou, L. Pan, N. Xiu, and G. Li, A 0/1 constrained optimization solving sample average approximation for chance constrained programming, Mathematics of Operations Research, 2024.