I am the Jay & Cynthia Ihlenfeld Associate Professor of Electrical and Computer Engineering (and CS by courtesy) at the University of Wisconsin-Madison, a faculty fellow at the Grainger Institute, and a faculty affiliate with the Optimization group at the Wisconsin Institute for Discovery. My research lies at the intersection of machine learning, coding theory, and distributed optimization. I am particularly interested in the theory and practice of large-scale machine learning and the challenges that arise once we aim to build solutions that come with robustness and scalability guarantees.

Before coming to Madison, I spent two wonderful years as a postdoc at UC Berkeley, where I was a member of the AMPLab and BLISS. I received my Ph.D. in 2014 from UT Austin, where I was fortunate to be advised by Alex Dimakis. Before UT, I spent 3.5 years as a grad student at USC. Before all that, I received my M.Sc. (2009) and ECE Diploma (2007) from the Technical University of Crete (TUC), located in the beautiful city of Chania.

In 2018, I co-founded the conference on Machine Learning & Systems (MLSys), a new conference that targets research at the intersection of systems and machine learning. In 2020, I was the program co-chair for MLSys. In 2019, I also co-chaired the 3rd Midwest Machine Learning Symposium (MMLS).

Cesium takes advantage of Web Workers to decode multiple meshes in parallel. In the case of 3D Tiles, this means multiple tiles can be streamed and decoded simultaneously. Furthermore, each primitive (or part) of a mesh can be decoded separately for faster decoding of complex models. We can retrieve each segment of the encoded buffer and pass the data to separate workers to decode asynchronously in parallel before returning the data necessary to render the mesh to the main thread. When supported by the browser, we load and compile the decoding module's WebAssembly binary and share it across multiple workers, further increasing the speed of decompression compared to a pure JavaScript solution.

Dequantization on the GPU

Cesium also decodes some attributes on the GPU, moving that work off the main thread and using less memory. Vertex attributes that are usually stored as 32-bit floating-point numbers, such as position data, can instead be decoded as quantized 16-bit integer values. Additionally, attributes that are unit vectors, such as normals, can be decoded as oct-encoded data. When decoding on the GPU, we skip the dequantization or octahedron transform operation in the Draco decoder module and instead retrieve and store any transformation constants. The smaller decoded data can then be passed to the GPU, where the dequantization or oct-decoding operations are performed in the shader when rendering. This results in a smaller memory footprint, both in the main application thread running on the CPU and in parallel on the GPU. When performing dequantization on the GPU with the previously mentioned New York City 3D tileset, there is a 52% savings in memory used by the GPU, with no difference in file size, no difference in visual quality, and no impact on total tileset loading time as compared to decoding entirely with the Draco decoding module.
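The per-primitive decoding described above starts by carving the encoded payload into independent segments, one per worker. A minimal sketch, assuming a segment table of byte offsets and lengths (the table layout and function name are illustrative, not Cesium's actual API):

```javascript
// Split one encoded payload into per-primitive segments so each can be
// decoded by its own worker. The { byteOffset, byteLength } segment
// table is an assumed input, not a real Draco structure.
function sliceSegments(buffer, segments) {
  return segments.map(({ byteOffset, byteLength }) =>
    // slice() copies into a fresh typed array; each copy's underlying
    // ArrayBuffer can be posted to a worker as a transferable, e.g.
    //   worker.postMessage(seg, [seg.buffer])
    buffer.slice(byteOffset, byteOffset + byteLength)
  );
}

const encoded = Uint8Array.from({ length: 16 }, (_, i) => i);
const parts = sliceSegments(encoded, [
  { byteOffset: 0, byteLength: 6 },
  { byteOffset: 6, byteLength: 10 },
]);
console.log(parts.map((p) => p.length)); // [6, 10]
```

Transferring each slice's `ArrayBuffer` (rather than structured-cloning it) keeps the hand-off to the worker cheap regardless of segment size.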
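The quantized-attribute path can be sketched in plain JavaScript. A float in a known range is stored as a 16-bit integer, and the shader reconstructs it from just two constants (a minimum and a range) with a single multiply-add; the function names here are illustrative, not Cesium's or Draco's API:

```javascript
// Sketch of 16-bit attribute quantization/dequantization, assuming a
// known [minValue, minValue + range] bound per component. Names are
// illustrative, not the actual Cesium/Draco API.
const QUANT_BITS = 16;
const MAX_QUANT = (1 << QUANT_BITS) - 1; // 65535

// Encoder side: map a 32-bit float onto a 16-bit integer.
function quantize(value, minValue, range) {
  const normalized = (value - minValue) / range; // in [0, 1]
  return Math.round(normalized * MAX_QUANT);     // in [0, 65535]
}

// Shader side (shown here in JS): only the constants travel to the GPU,
// e.g. in GLSL:  position = u_min + (a_quantized / 65535.0) * u_range;
function dequantize(q, minValue, range) {
  return minValue + (q / MAX_QUANT) * range;
}

// Round-tripping loses at most range / 65535 per component.
const min = -10.0, range = 20.0;
const v = dequantize(quantize(3.25, min, range), min, range);
console.log(Math.abs(v - 3.25) <= range / MAX_QUANT); // true
```

This is why the vertex buffer shrinks roughly in half: each component occupies two bytes instead of four, at the cost of one fused multiply-add per vertex at render time.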
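For unit-vector attributes such as normals, the shader-side step is the standard octahedral decode: unfold two components back onto the octahedron, mirror the lower hemisphere, and normalize. A sketch (the function name and input packing are assumptions; a real implementation would first unpack snorm bytes into [-1, 1]):

```javascript
// Decode an oct-encoded unit vector from two components in [-1, 1],
// mirroring what the shader does at render time. Standard octahedral
// mapping; the name octDecode is illustrative.
function octDecode(x, y) {
  let vx = x;
  let vy = y;
  let vz = 1 - Math.abs(x) - Math.abs(y);
  if (vz < 0) {
    // Lower hemisphere: fold the octahedron's bottom faces back out.
    const oldVx = vx;
    vx = (1 - Math.abs(vy)) * Math.sign(oldVx);
    vy = (1 - Math.abs(oldVx)) * Math.sign(vy);
  }
  // Normalize to recover a unit vector.
  const len = Math.hypot(vx, vy, vz);
  return [vx / len, vy / len, vz / len];
}

// (0, 0) decodes to the +Z axis; (1, 0) to the +X axis.
console.log(octDecode(0, 0)); // [0, 0, 1]
console.log(octDecode(1, 0)); // [1, 0, 0]
```

Two small components per normal instead of three 32-bit floats is what makes oct-encoding attractive for GPU-side decoding.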