5 Comments
Mar 21, 2022 · Liked by Prateek Joshi

Thank you for this detailed explanation

author

Thanks Ambrish. Glad you liked it.

Jun 12, 2022·edited Jun 12, 2022

Sounds like “Non-Euclidean” is the same thing as “non-spatial data or spatial data in more than 2 dimensions.” Is that accurate?

Also, I’m a little confused by the discussion about reducing the dimensionality of the data. You talk about 3D data being projected to a 2D approximation and how that loses information. But when you talk about 2D data, wouldn’t the correct analogy be projecting it to a 1D approximation?

author

Thank you for the questions. You bring up a couple of interesting points. Non-Euclidean data is data in any space that doesn't obey Euclidean geometry, but it's not necessarily all spatial data in more than 2 dimensions. There can be n-dimensional Euclidean spaces too.

As you go to higher-dimensional spaces, they accommodate multiple types of geometries (including Euclidean geometry). One way to think about it is that Euclidean geometry seeks to understand flat surfaces, e.g. 2D planes, while non-Euclidean geometry seeks to understand curved surfaces, graphs, networks, and more. It covers all data that doesn't follow Euclidean geometry.
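To make the flat-vs-curved distinction concrete, here's a minimal numpy sketch (a toy illustration of my own, not from the article): it compares the straight-line Euclidean distance between two points on a sphere with the geodesic distance along the curved surface.

```python
import numpy as np

# Two points on the unit sphere, written as 3D vectors.
p = np.array([1.0, 0.0, 0.0])
q = np.array([0.0, 1.0, 0.0])

# Euclidean view: the straight-line chord cutting through the sphere.
chord = np.linalg.norm(p - q)                      # sqrt(2) ~= 1.414

# Non-Euclidean view: the great-circle arc along the curved surface.
arc = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))  # pi/2 ~= 1.571

# The intrinsic distance on the curved surface exceeds the flat-space
# distance, which is exactly what Euclidean assumptions miss.
print(chord, arc)
```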

With regards to your question about the dimensionality of the data, my goal with that part of the article was to compare two types of 2D data: data that's natively 2D vs. data that's projected onto 2D from a higher dimension.

If the data is natively 2D, then the assumptions of a deep learning model hold true because the data is natively Euclidean. If the data is projected onto 2D, then those assumptions won't hold because the data is natively non-Euclidean.
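One way to see why the grid assumption matters, as a toy sketch (the tiny graph here is hypothetical, purely for illustration): a convolution kernel needs every location to have the same fixed, ordered neighborhood, which a regular image grid provides but a graph does not.

```python
import numpy as np

# On a regular 2D grid, every interior pixel has the same ordered 3x3
# neighborhood, so one shared kernel can slide over the whole image.
image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.ones((3, 3)) / 9.0             # simple averaging filter

patch = image[1:4, 1:4]                    # ordered neighbors of pixel (2, 2)
response = float(np.sum(patch * kernel))   # one convolution output

# On a graph, a node's neighbors form an unordered set of varying size,
# so there's no canonical way to line them up against a fixed kernel.
graph = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1}, 3: {1}}
print(response, {node: len(nbrs) for node, nbrs in graph.items()})
```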

When you take non-Euclidean data and try to represent it in Euclidean space (e.g. as a matrix) using a transformation, you tend to lose information about its structure and relationships. This is certainly true when you go from 2D to 1D as you mentioned in your comment.
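Here's a toy numpy sketch of that information loss, assuming the simplest possible projection (just dropping a coordinate): two points that are the farthest pair in 3D can land almost on top of each other in the 2D projection.

```python
import numpy as np

# Three points in 3D; a and b are the farthest pair, separated along z.
a = np.array([0.0, 0.0, 0.0])
b = np.array([0.1, 0.0, 5.0])
c = np.array([3.0, 0.0, 0.0])

def dist(u, v):
    return float(np.linalg.norm(u - v))

print(dist(a, b), dist(a, c))      # ~5.00 vs 3.00 in native 3D

# Project onto 2D by dropping the z coordinate (a linear transformation).
a2, b2, c2 = a[:2], b[:2], c[:2]

# After projection, a and b look almost identical even though they were
# the farthest pair in 3D: the projection destroyed that relationship.
print(dist(a2, b2), dist(a2, c2))  # 0.10 vs 3.00
```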

Thanks for the response. Makes more sense to me now. I remember covering general n-dimensional Euclidean spaces in college, but I initially misunderstood the analogy in this article and thought maybe there was a different meaning for the term in machine learning circles. I’m glad the ambiguity was only in my own headspace.
