<div dir="ltr"><div><div style="font-family:Verdana,Geneva,sans-serif">Dear colleagues,</div><div><p style="font-family:Verdana,Geneva,sans-serif">You are cordially invited to the general seminar organized by the Department of Mathematics, Atılım University.</p><p><font face="Verdana, Geneva, sans-serif">Our speaker is <b>Hande Alemdar</b>.</font></p><p><font face="Verdana, Geneva, sans-serif">The title of her talk is "</font>
<b>Ternary neural networks for energy efficient AI applications</b> <b><span style="font-family:Verdana,Geneva,sans-serif">".</span></b></p><p style="font-family:Verdana,Geneva,sans-serif">Date: <b>December 12</b>, <b>2018</b></p><p style="font-family:Verdana,Geneva,sans-serif">Time: <b>15:40</b></p><p style="font-family:Verdana,Geneva,sans-serif">Place: <b>FEF 404</b></p><p style="font-family:Verdana,Geneva,sans-serif">Please find the abstract of the talk below.</p><p style="font-family:Verdana,Geneva,sans-serif">With my best regards,<br></p><div><p style="font-family:Verdana,Geneva,sans-serif">On behalf of the Seminar Committee,</p></div></div><div style="margin:0px;font-stretch:normal;line-height:normal"><span style="font-family:Verdana,Geneva,sans-serif">Burcu Gülmez Temür</span></div></div><div style="margin:0px;font-stretch:normal;line-height:normal"><b><br></b></div><div><b>Abstract:</b></div><div>Deep neural networks have achieved state-of-the-art results on a wide range of artificial intelligence tasks. However, their computation and storage requirements are usually quite high, which limits their deployment on ubiquitous computing devices such as smartphones, wearables, and autonomous drones. In this talk, I will present ternary neural networks (TNNs) as a way to make deep learning more resource-efficient. A TNN is a discretized neural network in which the weights and activations are constrained to only three values (-1, 0, and 1). Because of this extreme limitation, there is no standard training procedure for TNNs. I will introduce a novel teacher-student approach for training TNNs without sacrificing much accuracy. Next, I will describe our purpose-built hardware architecture for TNNs and present benchmark results that demonstrate up to 3x better energy efficiency than existing solutions.</div></div>
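For readers curious about the core idea in the abstract, here is a minimal sketch of ternarizing real-valued weights to the three values {-1, 0, 1}. The threshold scheme below is only illustrative (the threshold `delta` and the function name are assumptions); the talk itself concerns a teacher-student training method, which this sketch does not implement.

```python
import numpy as np

def ternarize(w, delta=0.05):
    """Map real-valued weights to {-1, 0, +1} using a simple
    threshold delta. This is a common ternarization heuristic,
    not necessarily the scheme used in the talk."""
    t = np.zeros_like(w, dtype=np.int8)   # start with all zeros
    t[w > delta] = 1                      # large positive weights -> +1
    t[w < -delta] = -1                    # large negative weights -> -1
    return t

w = np.array([0.9, -0.02, -0.4, 0.03])
print(ternarize(w))  # elements: 1, 0, -1, 0
```

Storing each weight in two bits instead of a 32-bit float is what makes such networks attractive for the resource-constrained devices the abstract mentions.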