Random State
In optimization, we often think of our system as rolling along the curve of the loss function, following the local tangent toward the optimum until the slope is 0. There is an issue, though, when the function you are rolling along has so many parameters that computing this slope (now called a gradient) is very expensive. So instead, an approximate gradient is computed from a random subset of the data, and a step is taken in that direction. This technique is known as stochastic gradient descent, and it is the optimization algorithm behind so many of our favorite systems.
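To make the idea concrete, here's a minimal sketch of stochastic gradient descent fitting a line to noisy data (the toy data, learning rate, and batch size are just illustrative choices, not anything canonical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 2x + 1, plus a little noise.
X = rng.uniform(-1, 1, size=200)
y = 2.0 * X + 1.0 + 0.1 * rng.normal(size=200)

w, b = 0.0, 0.0   # parameters to fit
lr = 0.1          # step size
batch_size = 16

for step in range(500):
    # Pick a random mini-batch instead of using the full dataset.
    idx = rng.choice(len(X), size=batch_size, replace=False)
    xb, yb = X[idx], y[idx]

    # Gradient of mean squared error, computed on the mini-batch only.
    err = (w * xb + b) - yb
    grad_w = 2.0 * np.mean(err * xb)
    grad_b = 2.0 * np.mean(err)

    # Step along the (noisy) negative gradient.
    w -= lr * grad_w
    b -= lr * grad_b

print(f"w = {w:.2f}, b = {b:.2f}")  # converges near w=2, b=1
```

Each step only sees a small random sample, so individual steps wander a bit, but on average they point downhill, and the parameters still converge to (roughly) the right place.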
That’s why, when I was thinking that I would have a go at a blog to help develop my writing skills and voice, I couldn’t think of a more appropriate namesake than this method, which quite literally means going off on random tangents with the hope that you’ll eventually converge at the point you wanted to reach.
In many ways that mirrors exactly what I’ve experienced so far in life and education, and now I hope to capture that here. This blog will be about a lot of things: some related to my work, some to my hobbies, some to the news, some to old memories, and mostly to whatever is on my mind.
My dad recently shared “The Simple Truth Behind Reading 200 Books a Year” with me, and I wanted to extend this idea to writing. As a PhD student and researcher, a lot of my time is devoted to both reading and writing, and while reading comprehension is not something I believe I particularly struggle with, writing (within academic circles, for laypeople, and even plain old texting) can often be a challenge for me. So I hope to challenge myself to write more frequently and on broader subjects.
That’s it for now. I hope this initial condition is a good starting point for what’s coming next, and I hope you’ll come around more often to see what I’m up to.
-Victor Ardulov