Hey y'all! Same blog, different (Lanyon themed) look.
If you are here for my theoretical articles or notes, they are here.
The sidebar also lists all posts, separated by category.
Hope you enjoy!


Yet Another Comparison of Svelte & React

Keywords: Reactive Frameworks, Svelte, React, JavaScript

About a year ago, I was introduced to the ever-evolving frontend world, where new JavaScript frameworks seem to be released every single day, and I'd like to share a comparison of the two frameworks I've worked with the most. I'm in no way an expert on either of them, but I think it's a healthy exercise to understand them better by drawing a comparison between them.

React is a UI library (though widely treated as a framework) brought to us by Facebook, and it is widely considered the most popular JavaScript frontend framework there is. Svelte, meanwhile, is a relatively new framework created by Rich Harris, a graphics editor at The New York Times, with a focus on producing small, light web applications. As far as Stack Overflow surveys go, there is an interesting split between them: React is the most wanted framework, while Svelte is the most loved among current developers. Based on my experience thus far, I can kind of see why that is. More on that later. For now, here are the major differences that I felt.

1. useState() in React Sucks

This is probably the biggest difference I can think of between React and Svelte. In Svelte, whenever you want to change a reactive variable (one that also drives rendered for-loops or conditionals), you just change it. Just foo = "bar" and you are done!

React, though, provides a clunkier state management solution: you first initialize the state with its default value via useState(default), which hands you back a value and a setter (e.g. foo and setFoo), and you must update the variable through the setter rather than plain assignment (as aforementioned for Svelte.)
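To make the pattern concrete, here is a minimal plain-JavaScript sketch of the value/setter idea — not real React, and all names here are illustrative — showing why the setter matters: it is the only path that notifies the framework that something changed.

```javascript
// Minimal sketch of the useState pattern in plain JavaScript (NOT real React).
// The setter is what triggers a re-render; plain assignment to a local
// variable would change nothing on screen, which is the trap described above.
function createState(initial, onChange) {
  let value = initial;
  const get = () => value;
  const set = (next) => {
    value = next;
    onChange(); // real React would schedule a re-render here
  };
  return [get, set];
}

// Mimics `const [foo, setFoo] = useState("foo")`
let renders = 0;
const [foo, setFoo] = createState("foo", () => { renders += 1; });
setFoo("bar");        // goes through the setter, so the "re-render" fires
console.log(foo());   // "bar"
console.log(renders); // 1
```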

Because I started out with Svelte and only then moved to React, I have forgotten many times (often losing 2 hours at a stretch) that I can't just use assignment and instead need to go through the useState setter. Considering React is older than Svelte, it is somewhat expected that the newer framework would have the edge on this one.

2. React Styling

In React, you can either attach a separate stylesheet or apply inline styles (as JavaScript objects) to the individual components themselves. This is likely just personal preference, but I find it annoying to have to link each component to another CSS file (~2x more files that way), or to use inline styles.

Compared to this, Svelte lets you use inline styling or put a separate <style> block in the same file as your markup, the same way as vanilla HTML (with the bonus that those styles are scoped to the component). This is one of the examples where Svelte demonstrates its ridiculously low learning curve.
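For reference, React inline styles are plain JavaScript objects rather than CSS strings — property names are camelCased and values are mostly strings. A small illustrative sketch (React itself is not imported here):

```javascript
// React-style inline styles are plain JS objects, not JSON or CSS strings:
// CSS property names become camelCased keys.
const buttonStyle = {
  backgroundColor: "steelblue", // CSS `background-color`
  fontSize: "14px",             // CSS `font-size`
  padding: "0.5em 1em",
};

// In JSX this would be used as <button style={buttonStyle}>…</button>.
console.log(Object.keys(buttonStyle));
```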

3. Performance

React is said to treat your components as a black box and calculate the difference between what is currently on the page and what should be on the page using an in-memory representation known as the virtual DOM. Svelte, on the other hand, skips the virtual DOM entirely and instead compiles components at build time into code that performs whatever rendering changes need to take place directly. This has been noted to give Svelte the edge in performance; the Svelte website goes as far as calling the idea that the virtual DOM makes applications faster a "surprisingly resilient meme."

The main idea stated by Svelte is that even if the DOM is slow, adding a virtual DOM on top will only make things slower, as the native DOM will eventually have to be changed anyway. Furthermore, they point out that React's useState updates can in some cases lead to parts of an application re-rendering even when not needed.
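To illustrate the diffing idea (and nothing more — this toy is nowhere near React's real reconciler), here is a sketch that compares two plain-object "nodes" and collects only the changed keys. Svelte's compiler skips this whole step by turning each assignment into a direct DOM update.

```javascript
// Toy virtual-DOM-style diff: compare two flat "nodes" and list the keys
// whose values changed, so only those would need to touch the real DOM.
function diff(oldNode, newNode) {
  const patches = [];
  for (const key of new Set([...Object.keys(oldNode), ...Object.keys(newNode)])) {
    if (oldNode[key] !== newNode[key]) {
      patches.push({ key, value: newNode[key] });
    }
  }
  return patches;
}

const prev = { text: "Hello", color: "red" };
const next = { text: "Hello", color: "blue" };
console.log(diff(prev, next)); // only `color` changed
```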

In fairness, I've only really argued for Svelte thus far (and leaned pretty heavily on their own article). However, other sources reach similar conclusions, including benchmarks suggesting Svelte's bundles can be on the order of 26x lighter than React's.

4. Popularity

It would be foolish to discard popularity in this discussion. React, hands down, is more popular: Svelte has around 62K GitHub stars, while React has almost 200K and powers plenty of production websites.

Thus, it follows that when it comes to UI libraries and other community open-source tools, React will likely have much better support.

Final Remarks

Both Svelte and React are decent frameworks with their intended uses. Given the popularity difference, Svelte makes sense primarily for much smaller projects, whereas React is better suited for building larger websites.

Back to what I was saying about most wanted vs. most loved: thus far, I still prefer Svelte to React. It's extremely similar to raw HTML/CSS/JS and takes basically no effort to learn. In fact, I think it's easier to learn Svelte first than it is to learn HTML and JS first. However, despite loving Svelte, I felt I needed to learn React to collaborate with others on projects, such as the current project to build a coding competition website and grading server for our school's computer science club with 3-4 other devs.

While React isn’t as good as Svelte for me, it isn’t terrible and it’s allowing bigger collaborations with more people. After all, that probably matters more.

Attempts at Closed-Form Logistic Regression

Keywords: Logistic Regression, Regression, Normal Equation, Logistic Function, Optimization

Let's work our way up from the start. Linear regression is a ubiquitous algorithm in machine learning today, most commonly fit through iterative gradient descent. However, another method that I find pretty fascinating is the closed-form solution called the Normal Equation: instead of iteratively trying to minimize \(L = \sum_{i=1}^{N} (mx_{i} + b - y_{i})^2\) (where \({m, b}\) are the parameters), we directly solve for the value of \(m\) that sets \(\dfrac{dL}{dm}\) to 0. The bias \(b\) is then found as \(\bar{y} - m\bar{x}\), where \(\bar{z}\) denotes the average of a set \(z\). This solution is called closed-form because it takes a fixed number of mathematical operations to compute.
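The closed-form solve above can be sketched in a few lines; setting \(\dfrac{dL}{dm} = 0\) works out to \(m = \mathrm{cov}(x, y)/\mathrm{var}(x)\), with \(b\) recovered from the means as stated.

```javascript
// Closed-form simple linear regression: m = cov(x, y) / var(x),
// b = mean(y) - m * mean(x). No gradient descent -- a fixed number of ops.
function linearFit(xs, ys) {
  const n = xs.length;
  const meanX = xs.reduce((a, b) => a + b, 0) / n;
  const meanY = ys.reduce((a, b) => a + b, 0) / n;
  let cov = 0, varX = 0;
  for (let i = 0; i < n; i++) {
    cov  += (xs[i] - meanX) * (ys[i] - meanY);
    varX += (xs[i] - meanX) ** 2;
  }
  const m = cov / varX;
  const b = meanY - m * meanX;
  return { m, b };
}

// Points on the exact line y = 2x + 1 recover the parameters exactly.
console.log(linearFit([0, 1, 2, 3], [1, 3, 5, 7])); // { m: 2, b: 1 }
```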

However, this is old news. Unfortunately, this style of optimization is only possible for simple linear regression. The main reason is that the derivative of a linear model is really simple, compared to something like a composite neural network, which requires seemingly endless chain rule. The closest model to linear regression with a genuinely different shape (which is why ridge regression doesn't count) is logistic regression, where predictions of \(y_{i}\) are modeled as \(\sigma(mx_{i} + b)\), with \(\sigma\) being the logistic function (hence the name). The thing to remember, though, is that logistic regression is not really a regressor, but actually a binary classifier.

My question is whether we can actually create a closed-form solution for logistic regression. Two years ago, I tried doing exactly this with a method so lazy that it earned a rightful 0 upvotes on r/MachineLearning. Instead of that really bad method, how about trying to edit the existing linear Normal Equation for our case? A really good derivation of the Normal Equation can be found here. Perhaps we can modify the objective function to make it more logistic-y and see where that takes us.

The objective function is defined as \(L = \dfrac{(X\theta - y)^{T}(X\theta - y)}{N}\) (let's stick to \(\theta\) instead of \(m\) to look smarter), where \(X\) is the training feature matrix, \(\theta\) is the weight vector (a multi-dimensional \(m\)), and \(y\) is the label vector. We would modify this to contain \(\sigma(X\theta) - y\), and it soon becomes clear this approach probably will not work, as the resulting expression (before differentiation) contains a lot of \(\sigma(X\theta)^{T}\sigma(X\theta)\) terms, which is too annoying for my brain to work with. Nobody promised there would be a solution, right?
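To see concretely where this dead-ends, here is the gradient of the modified objective (my own derivation, so treat it as a sketch); setting it to zero leaves \(\theta\) trapped both inside and outside the exponentials of \(\sigma\):

```latex
% Modified objective and its gradient (\odot is the element-wise product):
L = \frac{1}{N}\,(\sigma(X\theta) - y)^{T}(\sigma(X\theta) - y)
\qquad
\nabla_{\theta} L = \frac{2}{N}\, X^{T}\Big[(\sigma(X\theta) - y)
  \odot \sigma(X\theta) \odot \big(1 - \sigma(X\theta)\big)\Big]
% Setting \nabla_{\theta} L = 0 mixes polynomial and exponential terms in
% \theta: a transcendental equation with no algebraic solution for \theta.
```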

Another method, which I think is at least slightly more serious, would be to modify the data itself with an inverse function. Exponential regression does exactly this by applying a log to all the labels (y-values) to turn the exponential curve linear. The model then learns a linear fit to predict \(\ln(y_{i})\), which is then exponentiated to give the actual prediction. Essentially, this just applies an inverse of the \(\sigma\) function before the linear outputs. So, let's just calculate the inverse of the logistic/sigmoid function \(\sigma(x) = \dfrac{1}{1 + e^{-x}}\)! This comes out to be \(-\ln\left(\dfrac{1}{x} - 1\right)\). Definitely not the worst thing in the world!
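A quick sketch of that inverse (usually called the logit), with a round-trip check that it really undoes the sigmoid:

```javascript
// The logistic function and its inverse (the logit), as derived above.
const sigmoid = (z) => 1 / (1 + Math.exp(-z));
const logit = (p) => -Math.log(1 / p - 1);

console.log(logit(sigmoid(0.7))); // round-trips back to ~0.7
console.log(logit(1));            // Infinity  -- blows up at the boundary
console.log(logit(0));            // -Infinity -- and at the other boundary
```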

Let's write this inverse as \(\sigma^{-1}\) and take a step back. The plan: apply \(\sigma^{-1}\) to every label to linearize the data, train a linear model (closed-form), and then run the linear predictions through \(\sigma\) (the actual logistic function.) We should also take a look at the domain of this inverse function and see whether our labels are valid in that range. Unfortunately, the inverse of the logistic function is not defined for all real numbers: it has vertical asymptotes at \(x=0\) and \(x=1\). Even more unfortunately, our labels are exactly the binary values 0 and 1!

So far, we have tried a direct closed-form solution and then layering transformations of inputs and predictions on top of a standard linear closed-form solve. If anything, these failures demonstrate why logistic regression has no closed-form solution. I believe it is always better to fail yourself than to take somebody else's word that a transcendental equation is why there is no closed form for logistic regression. The intuitive explanation I could come up with is that closed-form solutions don't work as well when you are not drawing a line to predict values (regression) but instead trying to separate regions (classification).

This is the end of our journey. Please let me know what errors I may have made or whatever you find. Thanks.