[Turkmath:7372] Feza Gürsey Fizik ve Matematik UygAr Merkezi::General Seminar::January 30, 2026::Friday, 13:30

kazimilhan.ikeda kazimilhan.ikeda at bogazici.edu.tr
Mon Jan 26 07:26:53 UTC 2026


Dear Mathematicians,

Semih Özlem (Mudanya Üniversitesi) will speak at the Feza Gürsey Fizik ve 
Matematik UygAr Merkezi General Seminar, to be held in person on Friday, 
January 30, 2026 at 13:30.

Details of the talk and directions are given below.

Best regards,
İlhan İkeda

-------------

Date/time/place: Friday, January 30, 2026; 13:30; Feza Gürsey Fizik ve 
Matematik UygAr Merkezi building.

Speaker: Semih Özlem (Mudanya Üniversitesi)

Talk title:
Weakened axioms, idempotent splittings, and the structure of learning: 
 From algebra to AI

Abstract:
We often think of mathematics as a tower of abstractions, but it begins 
with something deeply human: the act of telling things apart. In this 
talk, I'll explore how this simple idea—splitting and focusing—manifests 
across different fields, from linear algebra to motives to machine 
learning. We'll start with a basic observation: if we relax the unit 
axiom in a vector space, the scalar multiplication by 1 becomes an 
idempotent, splitting the space into what is preserved and what is 
annihilated. This splitting phenomenon appears in surprising places: in 
the theory of motives, where projectors decompose varieties; in knot 
theory, where Jones–Wenzl projectors filter diagram algebras; and in 
deep learning, where attention mechanisms focus on relevant features. 
I'll introduce the topos-theoretic model of neural networks 
(Belfiore–Bennequin) and suggest that learning difficulties like 
catastrophic forgetting and generalization gaps can be viewed as 
homotopical obstructions to achieving "nice" (fibrant) network states. 
Architectural tools like residual connections and attention can then be 
seen as learned, conditional idempotents—adaptable splitters that help 
networks organize information. This talk is an invitation to think 
structurally across disciplines. I won't present finished theorems, but 
a framework of connections that links motivic philosophy, categorical 
algebra, and the practice of machine learning. The goal is to start a 
conversation: can tools from pure mathematics—obstruction theory, 
homotopy colimits, derivators—help us design more robust, interpretable, 
and composable learning systems? No expertise in motives, knots, or AI 
is required—only curiosity about how ideas weave together.
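The basic observation mentioned in the abstract can be sketched as follows. 
This is a standard argument, not taken from the talk itself: if one drops the 
unit axiom 1·v = v from the vector-space axioms but keeps compatibility of 
scalar multiplication, the action of 1 is an idempotent operator, and the 
space splits along it.

```latex
% Let V satisfy all vector-space axioms except possibly 1 \cdot v = v.
% Define e : V \to V by e(v) = 1 \cdot v. Compatibility of scalars,
% a \cdot (b \cdot v) = (ab) \cdot v, gives
\[
  e(e(v)) = 1 \cdot (1 \cdot v) = (1 \cdot 1) \cdot v = 1 \cdot v = e(v),
\]
% so e^2 = e. Any idempotent linear map splits its domain:
\[
  V = \operatorname{im}(e) \oplus \ker(e),
  \qquad v = e(v) + \bigl(v - e(v)\bigr),
\]
% where \operatorname{im}(e) = \{\, v : 1 \cdot v = v \,\} is the
% "preserved" part (an honest vector space) and
% \ker(e) = \{\, v : 1 \cdot v = 0 \,\} is the "annihilated" part.
```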

Talk details: 
https://www.turkmath.org/beta/seminer.php?id_seminer=4170
Directions: https://fezagursey.bogazici.edu.tr/tr/ulasim
