
Interview with DINQ: “We Will Use Agents to Eliminate Information Asymmetry in the AI Industry and Make Pricing Transparent”
TechFlow Selected

After taking the DINQ valuation test, I had an emotional breakdown: Yao Shunyu is valued at $10 million, while I’m only worth a base salary of ¥1,000.
In the AI community, if you haven’t yet been “roasted” by DINQ, you probably haven’t truly entered the circle.
The product’s rise to fame is utterly “absurd”: not only can it extend irresistible job offers to top talent, but it also comes with a brutally candid “AI roast” feature. Just drop in a GitHub link or a Google Scholar profile—and the AI instantly transforms into a sharp-tongued interviewer, zeroing in on your citation count and code contributions with surgical precision.
By analyzing Yao Shunyu’s papers, citations, work history, and education background, DINQ predicted a $10 million compensation package.
This masochistic urge to “get roasted” has unexpectedly united researchers worldwide across social frontlines. From Stanford labs to Silicon Valley cafés, screenshots of personal valuation estimates are being shared everywhere. When Yao Shunyu was assessed at a $10 million valuation—and pitted against another researcher in an informal “showdown”—this tiny eight-person team, fresh off a multi-million-dollar investment from BlueRun Ventures, had already quietly slipped onto the radar of top AI professionals globally.
Roasting rising academic stars—and roasting Yao Shunyu.
“Yao Shunyu’s citation growth rate is faster than a rocket—he probably built a ‘language agent’ that auto-cites his own papers every three seconds. With an h-index of 25 and over 21,000 citations, he’s Princeton University’s sole faculty member whose bibliography is longer than the Great Wall.”
Roasting academic legend Jitendra Malik.
“With an h-index of 185 and over 250,000 citations, Jitendra has reached the pinnacle of academic stardom—he’s essentially the ‘final boss’ in every computer vision PhD student’s literature review. I even suspect he no longer needs to submit papers: just sneeze beside a GPU, and a top-cited annual paper will spontaneously materialize.”
Roasting cross-domain titan Bill Gates.
“Bill Gates? The only person who turned ‘selling Windows’ into a billion-dollar business! As ‘Chief Executive Officer,’ you’ve mastered the art of convincing people you’re the real deal—while simultaneously dodging every software update. Remember, buddy—in Australia, even kangaroos want to jump over your ‘legacy’!”
Funny as it is, many industry heavyweights have already joined DINQ during its closed beta—including researchers from OpenAI. Several even proactively recommended DINQ on X (formerly Twitter).
But behind the memes, DINQ is tackling something profoundly serious.
According to founders Sam and Kelvin, LinkedIn-style keyword-matching search is hopelessly outdated in the AI era. True AI luminaries are often “invisible”: they don’t submit resumes, don’t network on career platforms—their intellectual fingerprints are scattered across arXiv papers, Hugging Face projects, and even late-night rants on X.
DINQ’s logic is simple: since you won’t step forward, we’ll deploy AI agents to “doxx” you like digital detectives. This isn’t rigid database lookup—it’s intelligent inference with deep technical understanding. Even if an HR’s request is vague—like “find a young person who solves character consistency in video generation”—the agent can instantly comb fragmented online traces to surface the “underwater” genius who’s never appeared on any job board.
In this nearly 20,000-word deep-dive conversation, they discuss not just how to help big tech companies recruit—but how to build a DINQ Card for millions of global AI developers through the technical philosophy of “Less Structure, More Intelligence.”
A radical pivot from architecture graduate to Alibaba DAMO Academy algorithm engineer—via self-study
Jane: To begin, please introduce yourself and your company in one sentence.
Gao Daiheng (Sam): DINQ is an AI-native talent intelligence platform for AI developers, researchers, and creators. We help them get discovered and connected to world-class opportunities by automatically analyzing their authentic achievements and influence. As for me—I was the first algorithm engineer to join Alibaba DAMO Academy (later Tongyi Lab) via open-source contributions.
Jane: Your background doesn’t appear to be in computer science initially—when did you decide to pivot, and why? After all, your entire professional trajectory has revolved around this field ever since.
Gao Daiheng (Sam): Correct—I majored in architecture as an undergraduate. My deliberate shift toward computer science began in 2017. At the time, China was experiencing its first major wave of AI entrepreneurship—opportunities were highly concentrated, and crucially, high-quality learning resources were finally abundant online, making it realistically feasible for non-CS majors to break into AI.
But for me personally, the deeper motivation wasn’t “chasing trends.” Rather, I’d already sensed my original path was hitting a dead end. I was a graduate student at Beijing University of Technology—if I stayed on that track, landing a job in Beijing paying ¥10,000/month would be extremely difficult, and even then, it would likely involve grueling, unsustainable overtime.
At the same time, I'd begun hearing insider perspectives that the sector's structural health was questionable, and that long-term prospects for real estate–linked industries in particular looked bleak. In that context, I started asking myself early on: if this path was inevitably headed for "GG" (gamer slang for "game over"), shouldn't I proactively jump ship?
So ultimately, I combined macro-level trend analysis with deep personal reflection—and made a relatively radical yet rational decision: systematically teaching myself AI and computer science online. That became the foundation for everything that followed.
Jane: What was your entry point into computer science?
Gao Daiheng (Sam): Primarily through Andrew Ng’s MOOCs for systematic learning. Fundamentally, this field lacks canonical textbooks—everyone learns by consuming existing materials. So your background matters far less than your genuine interest in the domain.
Jane: Let’s talk about that highly acclaimed project you led at DAMO Academy. But first—was joining Alibaba your first or second role after deciding to focus on AI and algorithms?
Gao Daiheng (Sam): First correction: that project wasn’t done at Alibaba. Alibaba was my second job. Let me walk through the timeline: I graduated in 2018, but I’d already been contributing to open-source projects before graduation—starting with writing code for deep learning frameworks like TensorFlow.
Back then, I noticed a problem: when working on low-level infrastructure, few people understood what I’d actually built—or even what “I did.” It was hard to gain recognition in those days. But there was an upside: because of these contributions, domestic teams building deep learning frameworks—like Yuan Jinhui’s OneFlow—knew who I was. At the time, having over ten PRs on TensorFlow or PyTorch while based in mainland China was rare.
Then I asked myself: could I build something less infrastructural—something tangible and immediately understandable, requiring no lengthy explanations? Given my prior arts background, I thought visual or video domains might make my work more intuitive to others.
So after graduation, I joined a small startup. Outside work hours, I spent nearly every day on open-source projects—including DeepFaceLab, which emerged during this period.
Jane: That project received outstanding feedback. Was it solo work or a team effort?
Gao Daiheng (Sam): It was a multinational open-source collaboration. I recall its impact ranking second only to TensorFlow that year.
Jane: With such high-impact work, did you consider submitting to a top-tier conference?
Gao Daiheng (Sam): Yes—we submitted, but it got rejected. The content was highly controversial and sensitive at the time; academia wasn’t willing to take that risk. Later, I simply published it openly on arXiv.
Jane: Did this project’s positive reception solidify your confidence in joining DAMO Academy? Why didn’t you consider Meta or ByteDance instead?
Gao Daiheng (Sam): Core reason: DAMO let me stay deeply focused on video. Meta’s offer was for their “Red Team,” mainly handling defensive content moderation—reviewing large volumes of negative audiovisual material daily, which I felt would harm mental and physical well-being. ByteDance at the time focused more on audio/video codecs—less aligned with my specialized research direction.
Jane: DAMO is indeed more oriented toward cutting-edge R&D. You worked on several digital human projects there—could you share those experiences and insights gained?
Gao Daiheng (Sam): Over two to three years at DAMO, we experimented extensively—from pure tech to real-world deployment. For example, in the 2022 CCTV Spring Festival Gala, my colleagues and I co-developed a 3D tiger digital human for the livestream. Later, we contributed to the Winter Olympics’ 3D digital human project. Subsequently, I shifted focus to diffusion-model-based image generation—with our most successful project being Outfit Anyone, a virtual try-on system now generating ¥100–200 million annually for Alibaba Cloud.
Jane: You joined DAMO right before ChatGPT’s explosion—what was the internal atmosphere like? I’ve heard gossip from Alibaba friends that large models held an ambiguous status within Chinese tech giants back then—for instance, at the 2021–22 Yunqi Conference, Professor Yang Hongxia wasn’t originally slated to speak on LLMs; she stepped in last-minute. Objectively curious: what real shifts did you observe internally?
Gao Daiheng (Sam): Indeed—but first, let me clarify: our team wasn’t in the same reporting line as Professor Yang’s group, which focused more on text LLMs. I joined in 2020, and researchers like Luo Fuli and Lin Junyang—who later became prominent in LLMs—also joined around that time. I know many “Alibaba Star” hires, so I’m quite familiar with the situation.
Actually, Zuckerberg’s recent moves have effectively “sealed the deal” on the value of LLMs and talent. Look at whom big tech is now willing to pay huge sums to recruit—mostly young engineers who’ve built genuinely core technologies.
Jane: Absolutely—technical authority is returning to younger hands.
Gao Daiheng (Sam): Exactly. The underlying logic is fascinating: previously, research findings were scattered across communities and papers—but now everyone realizes that whoever can synthesize these dispersed breakthroughs into cohesive, functional systems holds the real power.
Jane: Returning to DAMO’s internal process: if you had an idea to implement, what was the workflow? Were success metrics based on publications or business impact?
Gao Daiheng (Sam): DAMO’s internal process was highly “bottom-up.” Team leads typically set only broad research directions—the rest was entirely up to us to explore. Management was flat, with minimal bureaucratic constraints. If you needed resources—compute, data labeling, interns—you could apply and get support.
In 2021–22, everyone was still feeling their way forward—not knowing where LLMs would ultimately land—so “reading papers for inspiration” was standard practice.
Jane: Got it. So there was no pressure for hard publication quotas—unlike SenseTime’s early explicit paper targets. Though both are research-oriented, Alibaba’s culture seems comparatively freer.
Gao Daiheng (Sam): Yes, exactly. That freedom created immense space for technical exploration.
Jane: When did the entrepreneurial idea first crystallize? Did it emerge after launching that open-source project—“I need to go out and build something, even if the exact direction isn’t fully clear yet”?
Gao Daiheng (Sam): Yes—that’s precisely when it began taking shape.
Jane: So why did you leave DAMO? Was it to explore open-source first, or were there other considerations? And who proposed the initial product concept—was it you or your co-founder?
Gao Daiheng (Sam): Initially, we had some abstract discussions. I firmly believed that the next-generation workplace community—especially one appealing to young people—must abandon the old playbook of “posting resumes and flaunting degrees.” It needed something radically different. Honestly, though, we debated for hours without landing on concrete specifics.
Early this year, I was in the U.S. My approach was simple: I'd experiment with Cursor (an AI-powered code editor) for "vibe coding," building a fun micro-app to test the waters. Its core flow was dead simple: users input a name or a Google Scholar link.
I know this demographic intimately—researchers’ first act upon entering the lab is often opening Google Scholar to check if their citation count rose.
Jane: Nailed the researcher pain point.
Gao Daiheng (Sam): Exactly. My feature was: paste the link, and AI delivers a “roast”—e.g., “Not enough first-author papers—step it up!” or “Why no top-conference submissions?” Pure humor and fun. Development cost was negligible—but post-launch, two unexpected discoveries emerged:
First, once models gained reasoning capability, they generated strikingly precise, abstract evaluations—roasting people with uncanny accuracy. People loved being roasted by AI—this “craving criticism” psychology was fascinating, whereas praise felt dull.
Second, after validating this “roast” tool, I realized its expansion potential was enormous. Based on this feedback, we dove deep into joint ideation—ultimately refining today’s product.
Jane: Understood—it grew organically from a tiny positive signal. Very interesting.
Sequoia’s hiring struggles revealed AI-era structural gaps in talent acquisition
Aaron: Kelvin, please introduce yourself briefly.
Sun Chenxin (Kelvin): My career has been singularly focused on HR and recruitment. A pivotal moment came when I joined Sequoia Capital, responsible for recruiting junior talent—including investors—for tech and consumer sectors. After Sequoia, I attempted several startups. My first venture closely mirrored our current business logic: during the mini-program boom, I sensed WeChat ecosystem (Moments, group chats) hiring efficiency was surpassing traditional platforms like Liepin and Boss Zhipin—so I built a recruitment mini-program.
Aaron: How did that venture fare?
Sun Chenxin (Kelvin): Quite dramatically. The product launched one week before the pandemic hit. While online growth was explosive—over 100 B-side companies spontaneously shared it in WeChat groups, and 100,000 resumes flooded in within two weeks—fundraising collapsed. Back then, people weren’t accustomed to virtual meetings—I couldn’t even meet investors face-to-face. After months of struggle, the venture quietly folded—a major regret. Later, I dabbled in cross-border e-commerce, but ultimately circled back to my “core domain.”
Aaron: Many wonder: what’s Sequoia’s actual bar for hiring junior investors?
Sun Chenxin (Kelvin): Criteria evolve yearly—but the core principle remains one phrase: “Absolute top performers among peers.” Sounds abstract, but perceptually: a 25-year-old must exude a palpably distinct aura. Background is irrelevant—journalists, PMs, coders all qualify. If you possess exceptional deep-thinking ability and intrinsic drive—clearly outpacing peers—you’re our target.
Aaron: From an HR lens, what’s the core challenge in hiring for consumer or tech firms?
Sun Chenxin (Kelvin): Recruiting for portfolio companies’ biggest hurdle is “lack of brand awareness.” Regardless of backing by Sequoia or Hillhouse, most candidates don’t know what these companies do. Low brand recognition is the single largest recruitment barrier.
By contrast, B2C firms hire easily—they advertise constantly. I remember recruiting for Pinduoduo: introductions flowed smoothly because everyone knew that earworm jingle. Pre-IPO, Shanghai was plastered with Pinduoduo ads—brand awareness was undeniable. But for B2B firms—or entirely novel, obscure frontier fields—achieving talent breakthroughs is extremely difficult because outsiders simply don’t know they exist.
Aaron: How many steps does hiring typically involve—from demand identification to offer issuance—and how long does it take?
Sun Chenxin (Kelvin): Conventionally, it starts with omnichannel sourcing: scanning domestic/international job boards, posting on WeChat Moments/groups, leveraging personal networks—even contacting people who “know the target talent” to identify key nodes. Premium options include headhunters. All channels get tested.
Within one to two weeks, unsuitable candidates are filtered out, leaving 3–5 profiles with full backgrounds and strong expressed interest (motivation). That’s already two weeks gone. Then interviews and offer negotiations follow—taking another one to two months to finalize. Add onboarding prep, another 3–4 months. So even under ideal conditions, filling a tough role takes a quarter—and many roles remain “unsolvable,” perpetually unfilled.
Aaron: From your professional perspective, how would you describe DINQ’s business in one sentence? What kind of product is it, really?
Sun Chenxin (Kelvin): Stripping away AI’s technical layer, I see it as the most efficient self-expression tool for every AI practitioner. Our personal homepage is itself a high-efficiency self-presentation medium.
From the recruiter’s side, our search engine is a higher-efficiency talent discovery tool. It essentially replaces the first two steps I described—saving recruiters at least two weeks of blind searching.
Aaron: When did you realize traditional resume submissions, LinkedIn profiles, and legacy hiring processes had become obsolete—and needed disruption—in this new AI arena?
Sun Chenxin (Kelvin): Though I haven’t directly recruited recently, I retain the label “someone who helps place talent.” Friends launching AI startups still ask: “Kelvin, can you connect me with top algorithm engineers or full-stack devs?”
That’s when I realized I’d “gone offline.” Previously, finding talent via 1st- or 2nd-degree connections was easy—but after this AI wave surged, I didn’t recognize a single person in the circle. It triggered anxiety: though I’m no longer in the field, I refuse to lose this professional identity.
I recognized an entirely new cohort emerging. I know many traditional-sector CTOs—but they’re outside this domain and can’t grasp its logic. This isn’t the era where “offering $2M/year gets you a traditional CTO to solve AI challenges.” Talents like Sam—and countless elite young professionals—exist beyond traditional visibility. We don’t even know where to find them. That was my crisis.
Aaron: How did you tackle this crisis?
Sun Chenxin (Kelvin): I began researching where they actually congregate. I consulted HR friends at LLM companies: “Where do you hunt?” Turns out, they manually scour GitHub and Google Scholar—LinkedIn rarely yields results. Even when found, they must dig through personal websites for contact info and email. Industry referrals are inefficient—helpful but reliant on “non-traditional channels.” So I adopted that approach too.
Aaron: So it’s fair to say your frustration with inefficient sourcing methods directly inspired this product?
Sun Chenxin (Kelvin): Yes. But frankly, I didn’t “build” this product—it was Sam who built it, and he showed me “this problem can be solved this way.” I was late to this realization.
Aaron: How did you two initially connect?
Sun Chenxin (Kelvin): Simply put: a friend asked me to find a cross-domain expert skilled in both trading and AI agents. I noticed a famous open-source project Sam mentioned earlier. On its paper, I saw a Chinese name—“Gao Daiheng”—and mobilized all resources to find someone who knew him. Eventually, an investor from a VC firm introduced us. Classic “old-school” method—leveraging recruitment networks built over years.
Aaron: What was your first impression of Sam? How did that evolve—and what prompted your decision to collaborate?
Sun Chenxin (Kelvin): Honestly, no particularly strong first impression. I contacted many similar tech elites then—routine outreach: “I have an opportunity; are you interested?” He naturally declined.
Initially, my understanding of both Sam and the AI field was shallow. The turning point came later: when he conceived the hiring-product idea and realized my expertise, he initiated deep discussions. That’s when my perception sharpened. Early on, we communicated exclusively online—never met—but clicked instantly.
I noticed his extraordinary learning agility—he pursued me relentlessly with hard, granular operational questions after learning I knew recruitment. Later, I learned he’d self-taught AI from scratch. I believe when self-learning ability reaches sufficient strength, it becomes a foundational habit—enabling breakthroughs across domains. So my core label for him is: exceptional self-learner.
Gao Daiheng (Sam): Thanks, Kelvin. My thinking was simple: after wrapping up that project, I wanted to explore new things. Assessing our respective strengths and core competencies, we realized we shared deep intuition about human traits, cognition, and mobility patterns. So I wondered: could we build around “people” as the central axis?
Once anchored there, recruitment was the most natural extension—and market demand was clearly massive. That’s why I consulted Kelvin extensively. Early conversations happened while I was still in the U.S.; Kelvin generously shared his profound HR industry insights. As dialogue deepened, we both felt this deserved scaling.
Aaron: So you aligned on the strategic direction from the outset—and the specific product form emerged from continuous “collision” between you two?
Gao Daiheng (Sam): Yes. Regarding product form, I hesitate to use “converged” now—because in today’s AI landscape, no platform can claim its product is fully mature. If technology and models were truly “converged,” companies wouldn’t need to spend hundreds of millions annually competing for top Chinese AI researchers.
Given the industry’s fluidity, we’ve achieved a phase-appropriate consensus: our current model aligns intuitively with youth sensibilities. How did it evolve? No shortcuts—just relentless, frequent engagement with young people and target users. That’s how we know precisely what they truly love.
Yang Jianchao vs. Zhou Chang
Product deep dive: How DINQ uses Agent reasoning to end AI talent search paralysis
Aaron: Traditional hiring relies on keyword matching. What dimensions does DINQ’s talent evaluation framework cover—and what’s the fundamental difference versus conventional approaches?
Gao Daiheng (Sam): Technically, we’re in an era of “deconstruction and remix”—where information complexity grows geometrically. This creates a classic mismatch: a candidate’s core label might be “R2” or “RAG,” but an HR’s query is “large image models” or “video generation.” With LinkedIn’s lexical search, mismatched terms mean that person may never surface.
We surveyed over 1,000 OpenAI researchers and found that more than half don't maintain LinkedIn profiles, or lack accounts entirely. Technical experts' information lives in official blogs, preprints, and niche forums, making B2B discovery extremely hard.
Aaron: So what’s your solution?
Gao Daiheng (Sam): Given extreme fragmentation—someone might host a project on Hugging Face, explain techniques on Twitter, publish papers on arXiv, and post conference posters on Xiaohongshu—we abandoned the “LinkedIn profile-centric” approach. Instead, we built a system centered on Agent invocation. We pre-process and embed top conferences and AI companies’ data. When queried, Agents dynamically fetch and reason across the entire web.
For example, Sora 2’s lead Chinese author is Li Liunian (Harold). Ask any generic LLM or Agent platform, and he’s unlikely to surface—data isn’t aligned. But our system identifies him precisely via his papers, GitHub, and social activity.
Aaron: Let’s delve deeper into “fragmentation.” Traditional hiring heavily relies on LinkedIn—but AI researchers’/engineers’ data is scattered across Google Scholar, GitHub, etc. Can you give a concrete example illustrating how severe this fragmentation is?
Sun Chenxin (Kelvin): Today’s pain point is: when HR receives a demand, business teams specify requirements with extreme granularity—e.g., “solve character consistency in video generation.” HR’s biggest problem? They literally don’t know where these people live.
Our tool lets HR dump all fragmented clues they hold—solving the “0 to 1” breakthrough. Previously, searching LinkedIn with vague phrases yielded zero results—because no one posts granular research outcomes there. That’s LinkedIn’s fatal flaw: low information density, mostly just school names.
If you’re hiring “Tsinghua/Peking University graduates,” LinkedIn works fine. But if you seek someone solving a specific technical problem, industry terminology isn’t standardized—and LinkedIn won’t surface them. Boss Zhipin or Liepin? Impossible—target audiences don’t even browse those platforms. This was previously “unsolvable.”
The old brute-force method was manual paper hunting—but expecting HR to read papers is unrealistic and outside their scope. Ultimately, it boiled down to word-of-mouth referrals or internal recommendations—highly inefficient.
Aaron: If I’m a Meta HR rep seeking an AI Scientist, what’s the typical flow on DINQ?
Gao Daiheng (Sam): We offer three modes:
First, you state a clear need in plain language—e.g., “Find NeurIPS 2025 oral presenters in [specific area], preferably with U.S. work visas.”
Second, if you have a JD (e.g., hosted on Greenhouse), just paste it—our system finds matches.
Third, if you already have a person in mind: say Candidate A is perfect but declines, or A is your own employee. You can ask, "Find someone like A," e.g., still in a PhD program, or actively exploring opportunities. Or "Find people born after 1995 ('95ers') who left ByteDance, excel in [area], and may be open." Or "Search SCI-indexed publications for such profiles." All of this is possible.
Sun Chenxin (Kelvin): Let me add context. Traditionally, professional HRs or top-tier headhunters conduct broad sourcing—yielding a 100-person “long list.” Then they contact and screen, primarily verifying two things: capability and willingness. Only those passing both advance. This forms a “short list” for hiring managers (partners, CTOs, etc.).
We effectively deliver the short list immediately—because the entire web-wide sourcing is handled by agents, eliminating manual time. Output is the short list. Then humans do what they excel at: direct communication and persuasion.
Aaron: I noticed your “compensation package” feature for candidates—even attracting professors from Stanford, Berkeley, and NYU to try it. How is this generated? I tested Yao Shunyu myself—he scored a $10M package.
Gao Daiheng (Sam): We leverage public data sources like Levels.fyi to build a scoring model mapping talent level to compensation. It was initially built for fun, with no rigorous calibration, yet the response vastly exceeded expectations. Yao Shunyu's $10M valuation? Surprisingly accurate.
Aaron: How does the AI robot perform layered analysis?
Gao Daiheng (Sam): Upon registration, we vectorize users' resumes and social media. During search, intent recognition kicks in: querying "top conferences" auto-maps to venues like CVPR and NeurIPS, and searching for people born after 2000 ("00s") triggers inference across global data. When a paper lists hundreds of authors, manual screening is impossible, but agents can instantly rank them by importance.
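A minimal sketch of what such query-time intent recognition could look like: vague phrases are normalized into concrete filters before any retrieval runs. The rule table and candidate records below are hand-written illustrations, not DINQ's actual mapping.

```python
# Hand-written intent rules; the real system presumably infers these.
INTENT_RULES = {
    "top conferences": {"venues": {"CVPR", "NeurIPS", "ICML", "ICLR"}},
    "00s": {"born_after": 2000},
}

def parse_query(q):
    """Collect concrete filters for every vague phrase found in the query."""
    filters = {}
    for phrase, rule in INTENT_RULES.items():
        if phrase in q.lower():
            filters.update(rule)
    return filters

def matches(person, filters):
    if "venues" in filters and not (set(person["venues"]) & filters["venues"]):
        return False
    if "born_after" in filters and person["birth_year"] < filters["born_after"]:
        return False
    return True

# Invented candidate records standing in for the indexed profiles:
candidates = [
    {"name": "A", "venues": ["CVPR"], "birth_year": 2001},
    {"name": "B", "venues": ["ICME Workshop"], "birth_year": 1988},
]
filters = parse_query("00s talent with top conferences oral papers")
shortlist = [p["name"] for p in candidates if matches(p, filters)]
```

Here "top conferences" and "00s" both resolve to hard filters, so only candidate A survives; the same plain-language query against a keyword index would match neither phrase literally.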
Aaron: Could bias arise—e.g., PhD students doing critical work but not publishing? Do you have mechanisms to correct such gaps?
Gao Daiheng (Sam): Excellent question. We plan to index arXiv content based on user queries—but haven’t implemented this yet.
Aaron: What’s current B2B user feedback like?
Gao Daiheng (Sam): Meta’s executive search team is already using it, and several overseas AI firms are co-building—they need highly multidimensional reference frameworks.
Sun Chenxin (Kelvin): Domestically, teams like Flow, Moonshot, Zhipu, and PixVerse are quietly trialing it. The most common feedback word? “Magical.” Previously, HR faced massive internal friction with vague demands; now, inputting a demand instantly renders candidates—making the target profile concrete. HRs exclaim: “Ah—these are the people I needed!” This leap from ambiguity to clarity is more valuable than merely finding people.
Aaron: Are current clients prioritizing seasoned scientists with big-tech experience—or fresh PhD grads?
Sun Chenxin (Kelvin): Both demands exist—but currently, the latter dominates. Senior scientists are prohibitively expensive. Also, “visible” celebrities are already known to HRs. Even for familiar names, our tool adds value: HRs often report, “I added this person on WeChat but forgot completely”—because remembering 5,000 contacts is humanly impossible.
Jane: Does this imply that AI-era hiring demands—especially for elite talent—have fundamentally shifted, rendering corporate HR’s existing mental models inadequate for today’s technical boundaries?
Sun Chenxin (Kelvin): Exactly. Consider ByteDance’s adjustment: they redeployed many employees who build algorithms or products—but aren’t necessarily “elite” technically—into full-time recruitment roles. Because they understand business and tech, they find better fits.
Current feedback shows ByteDance Seed and Flow teams’ experiments are highly effective. Their elite-recruiting teams include many with zero prior recruitment experience—all transferred from business lines.
But only mega-corps can afford such luxury. For most companies, hiring such talent for core work is already a win—let alone diverting them to recruitment. This “using a scalpel to slaughter chickens” model lacks scalability.
Jane: I recall knowing headhunters who, when big firms sought scientists in North America, could only rely on local specialists with deep scientist networks. Even then, candidate profiles remained vague—requiring individual meetings. The whole process resisted standardization—essentially “casting wide nets.” Demand-side shifts here are indeed the largest.
Sun Chenxin (Kelvin): Yes—that’s the reality.
Aaron: Let me probe deeper: does your product actually read papers to identify content and match roles? How granular can your talent-tagging get?
Gao Daiheng (Sam): Such requests automatically trigger multi-dimensional matching: first, publicly impactful non-open-source outputs (e.g., Sora 2); second, popular Hugging Face projects or highly upvoted works; third, “underwater” contributors—e.g., papers from Sun Yat-sen or Soochow Universities.
Currently, for efficiency, we rely on abstract reading for precise matching. Full-text parsing would massively inflate token costs and per-person processing expenses—so full-text reading isn’t live yet.
Aaron: How many data sources are you integrated with?
Gao Daiheng (Sam): Roughly two to three dozen. Coverage includes Google Scholar, Medium, Twitter, etc. arXiv isn’t formally integrated yet—but it’s planned.
Aaron: Are top-conference data pre-processed?
Gao Daiheng (Sam): Yes—we pre-process conference data. Since conferences announce results en masse, we can directly ingest annual conference lists and author rosters.
Aaron: Technically, what’s the hardest part? Data scraping/cleaning/alignment—or privacy risk control?
Gao Daiheng (Sam): There are countless details—no single “hardest” part, because any weak link becomes a bottleneck. I’ll highlight three representative challenges:
1. Disambiguation: A classic academic search problem. Name collisions are rampant—ensuring correct entity linking is critical. Mis-linking ruins user experience.
2. Timeliness: E.g., an author moved from OpenAI to Meta, but our system still shows their old affiliation. How to dynamically update databases and sync in real time? Traditional platforms buckle under passive-update costs.
3. Agent Path Selection: Based on user needs, the system must decide where to search, how to minimize path length, and how deeply to drill down. This involves intricate trade-offs between depth-first and breadth-first strategies—and we continuously upgrade the model’s reading comprehension (Read) capability.
Aaron: For point #1, can you explain with a simpler example—e.g., distinguishing two Yao Shunyus with identical English names?
Gao Daiheng (Sam): Key disambiguation dimensions: uniqueness of Google Scholar ID, photo differences, educational background, and career trajectory. Combining these reliably separates them.
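The dimensions Sam lists can be combined into a simple weighted matching score. A minimal sketch of the idea—field names, weights, and profile data here are invented for illustration, not DINQ's actual implementation:

```python
# Hypothetical entity-disambiguation sketch: score how likely two profile
# records refer to the same person. Weights and fields are illustrative only.

def match_score(a: dict, b: dict) -> float:
    """Return a similarity score in [0, 1] for two candidate profiles."""
    score = 0.0
    # A shared Google Scholar ID is effectively a unique key.
    if a.get("scholar_id") and a.get("scholar_id") == b.get("scholar_id"):
        return 1.0
    # Weaker signals: educational background, career trajectory, photo.
    if a.get("alma_mater") == b.get("alma_mater"):
        score += 0.4
    if set(a.get("employers", [])) & set(b.get("employers", [])):
        score += 0.4
    if a.get("photo_hash") and a.get("photo_hash") == b.get("photo_hash"):
        score += 0.2
    return min(score, 1.0)

# Two researchers with the same name but different backgrounds:
p1 = {"name": "Shunyu Yao", "alma_mater": "Princeton", "employers": ["OpenAI"]}
p2 = {"name": "Shunyu Yao", "alma_mater": "Tsinghua", "employers": ["ByteDance"]}
print(match_score(p1, p2))  # → 0.0, so treat them as distinct entities
```

In practice each signal would itself be fuzzy (photo similarity rather than an exact hash, employer overlap weighted by time period), but the principle is the same: one near-unique key short-circuits the decision, and weaker signals accumulate.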
Aaron: For point #2 on timeliness—how do you capture real-time updates, given traditional methods rely on passively following people?
Gao Daiheng (Sam): The prerequisite is relevant info existing online. These technologists rarely use LinkedIn—instead maintaining personal websites. But these sites are wildly scattered (e.g., GitHub-hosted standalone pages)—if you don’t know the exact path, you can’t find them. Our advantage lies in knowing where they are—and pre-caching data to efficiently extract info from complex independent sites.
Aaron: Have you encountered tricky or unexpected cases in practice?
Gao Daiheng (Sam): Yes. Recently, I demoed for a friend: “Which of this person’s collaborators might be open to opportunities?” The result was spot-on—even though the person is a senior university professor. Another case: Su Jianlin (“Su Shen”) and I wanted to gauge our connection depth. Though we’ve never collaborated directly, the system traced intermediaries to map our relationship.
This reveals a fundamental truth: when model intelligence, reasoning power, and subpage crawling capabilities reach sufficient levels, “Less Structure” enables “More Intelligence.” You can trust the model’s judgment more.
Aaron: If a candidate hasn’t updated their personal website—but their latest paper notes a new institution/company—can you detect it?
Gao Daiheng (Sam): Yes. We aggregate web-wide information. Even if the person hasn’t updated their site, we capture new affiliations via their latest academic footprint.
Jane: Do media news reports serve as data sources?
Gao Daiheng (Sam): Centralized media often lags. Currently, social media is our most effective source—higher timeliness. Among centralized outlets, only a tiny fraction serve as auxiliary references.
Aaron: Privacy is inevitably involved in data. How do you define which data is permissible for evaluation—and which is off-limits?
Gao Daiheng (Sam): We strictly define privacy-sensitive data. Phone numbers and WeChat IDs are “intrusive” contact methods—we generally don’t provide them. Email is considered non-intrusive. In practice, personal websites rarely list private phones or WeChat IDs—we rely overwhelmingly on publicly accessible information.
Aaron: If someone opts out of being searchable on your platform, how do you handle it?
Gao Daiheng (Sam): Upon receiving a formal inquiry, we fully delete their data from our system—ensuring permanent removal.
Aaron: What’s your favorite DINQ feature?
Gao Daiheng (Sam): My favorite is Network. When you look up someone, you don’t just see them—you see their six closest collaborators. Click any node to reveal their full network. This lets you trace entire talent ecosystems—from paper co-authorship and GitHub contributions to shared team affiliations. It transforms talent search from “single-point lookup” to “network expansion”—smooth as silk on-platform.
Aaron: Personally, I found the Compare/PK feature quite engaging.
Gao Daiheng (Sam): Absolutely. PK started out abstract—like red vs. blue in King of Fighters. Later, friends pointed out that academics and open-source folks don't equate "fewer stars or citations" with weakness—they joke that winning on seniority is hardly a fair fight. So the current PK interface presents both contenders evenly. Its purpose is simply to inject some lightness into serious talent search—no ulterior motive.
Fei-Fei Li vs. Jia Deng
Beneath market & commerce: The efficiency war and value war in AI hiring
Aaron: From a market perspective, what’s DINQ’s relationship with traditional headhunters? Replacement or augmentation?
Sun Chenxin (Kelvin): Short-term: definitely augmentation. We drastically cut “sourcing” time for recruiters—but deep dialogue and persuasion still require humans. Companies choose simply: budget allows outsourcing to headhunters; otherwise, use tools internally. Sensitive roles often demand hands-on handling. Currently, we position as an efficient tool.
Long-term, DINQ will displace lower-tier headhunters—defined as those who can’t even write effective prompts for AI tools. My recent research found headhunters who literally can’t formulate their first query—these individuals face existential risk.
Aaron: Can you elaborate on cost structures? How do headhunters charge—and how does your pricing intersect?
Sun Chenxin (Kelvin): Globally, top headhunters charge 20–30% of a candidate's annual salary. Recruiting a $1M/year executive costs $200K–$300K. Domestic rates are slightly lower—typically 20–25%.
Our pricing isn’t finalized—but preliminary plans range $100–$300/month. Even if you “grind” daily on the platform, annual cost remains far cheaper than one headhunter engagement. Business owners calculate this instantly.
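The comparison Kelvin sketches is easy to check with the figures he quotes (fee rate and subscription price are taken from the interview; the midpoint fee is my choice):

```python
# Cost comparison using the figures quoted in the interview.
annual_salary = 1_000_000               # $1M/year executive
headhunter_fee = annual_salary * 0.25   # 20–30% of salary; take 25%
platform_annual = 300 * 12              # top of the $100–$300/month range

print(headhunter_fee)                    # → 250000.0
print(platform_annual)                   # → 3600
print(headhunter_fee / platform_annual)  # roughly 69x cheaper, even at max price
```

Even a single headhunter engagement for one senior hire costs roughly two orders of magnitude more than a year of the platform at its highest proposed tier.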
Aaron: Do you position yourselves as a “super AI recruiting assistant”—or future “AI headhunter”?
Sun Chenxin (Kelvin): Neither. I see DINQ as an AI-native professional social platform. Recruitment is just one expression of professional networking—others include finding collaborators, clients, or technical peers. E.g., an API vendor seeks developer customers; an AI motion designer seeks peers to exchange ideas. Our vision extends far beyond hiring.
Our positioning mirrors early LinkedIn: more efficient self-presentation and connection. Whether post-connection behavior is hiring or chatting—the platform accommodates both.
Gao Daiheng (Sam): Adding: our goal is a platform for all AI talent. “AI-native” means using AI to boost productivity by an order of magnitude. Currently, algorithm and dev productivity tools are maturest; design tools are reaching mass adoption. Future industries will undergo similar transformation—generating domain-specific tools and workflows.
This era is decentralized—favoring super-individuals—but individuals need channels to access better opportunities and people. We provide superior connection and reach. Future opportunities won’t rely on “people seeing posts and forwarding to friends”—but on agents autonomously analyzing opportunity platforms. This will happen—and requires a foundational platform.
So we let C-side users upload a range of social media to capture the full picture: the old resume was written for humans, while an AI-era resume serves both AI and humans—the subjective, rich, hard-to-define aspects of a person come through in photos, videos, and social feeds. As multimodal understanding improves, human profiling becomes more holistic.
From the tag-based profiles of 2010-era Twitter and Weibo, which described people in human-readable labels, to today's continuous embeddings—tomorrow, a person's development potential and capability boundaries may even become predictable and plannable. That's the platform's greatest value. From day one, we attract users through core technology—currently, our matching engine.
Sun Chenxin (Kelvin): Recently, we ran a small-scale ad campaign—results wildly exceeded expectations.
First, occupational diversity was extreme. Sign-ups included IKEA’s Chief Scientist, Capital One’s Chief AI Engineer—and unexpected individuals: an Egyptian woman delivering AI motion-design services via Twitter; even an Egyptian user listing “football coach,” who turned out to be using AI for player performance analytics.
Second, geographic spread was vast. Despite micro-targeting, users spanned everywhere except the poles—from Egypt and the Middle East to India, Denmark, Italy—global AI mania is real. This far surpassed our initial “Bay Area + Haidian District” assumption.
Aaron: Future monetization: credits, subscriptions, or pay-per-result?
Sun Chenxin (Kelvin): Initial plan: credit-based search billing—keeping it simple. C-side remains free for now; we’ll wait for scale to observe behavior and core needs before finalizing. B-side adopts an Agent-tool-company model—selling credits.
Jane: Have you considered evolving the C-side into a community product?
Gao Daiheng (Sam): The product inherently supports community features. Think of it as “Linktree—but AI-powered, with chat.” We haven’t opened public feed posting yet—early-stage platforms lack sticky content there.
Jane: How would you frame the TAM for AI hiring to investors?
Sun Chenxin (Kelvin): We used to think AI-core practitioners numbered in the millions—but AI Users are the larger base: now exceeding 100 million globally. They all need global connections and opportunities. If our platform makes ultra-low-probability connections easy—the addressable market is infinite.
Gao Daiheng (Sam): Core platform value lies in AI intelligently serving users. We’ll soon transition to recommendation mode: as we learn more about users, suggestions grow increasingly precise. Future users won’t just find collaborators—they’ll find mentors, partners.
Jane: It’s a longer-term play. Short-term B2B—but long-term ceiling is higher and more adaptive.
Sun Chenxin (Kelvin): Correct. And our real business isn’t rigidly B2B/B2C—because every B is ultimately a C working, just using it more in professional contexts.
Gao Daiheng (Sam): Elevating further: core platform value is AI enabling superior service. Traditional platforms create value jointly with suppliers and users; but today, ChatGPT proves a single chatbox can be a platform—because the LLM itself provides intelligent service. Same for us: if intelligence is high enough to solve enough problems, you’re just an input box—and users will come.
We’ll quickly move to recommendation. Initially, lacking user insight, pushing people blindly is useless. As we learn more—and users intermingle across platform boundaries—recommendations improve exponentially.
Future users won’t just find collaborators—they’ll find mentors, partners. Pre-AI, optimal matching was impossible: reviewing all suitable mentors and evaluating them required impossible information access and processing efficiency. Today’s engines make it feasible. Our launch version includes “find my adviser”; partner-finding follows the same logic.
Sun Chenxin (Kelvin): Not joking. Many marriages happen at work—real demand. People even date on LinkedIn. Scale it enough, and it happens.
Jane: This truly appeals to VCs. For example, Sequoia—encouraging internal entrepreneurship—would hugely benefit from directly connecting to core talent via platform-to-platform links.
Sun Chenxin (Kelvin): Exactly.
Jane: How high is the barrier for traditional LinkedIn teams to replicate your data sources and architecture?
Gao Daiheng (Sam): Architecturally, our asynchronous search system is highly competitive. If open-sourced, 10,000 GitHub stars wouldn’t be hard.
Reproducing a basic version might take a top engineer five hours—but true moats lie in error judgment, path selection, and long-term parameter tuning. In AI, there are no textbook answers; without business logic intuition, tuning drags endlessly.
Aaron: Recent big-tech talent wars are intense—Tencent doubles salaries, Alibaba pushes internally. How do you view the AI hiring market over the next 2–3 years?
Join TechFlow official community to stay tuned
Telegram: https://t.me/TechFlowDaily
X (Twitter): https://x.com/TechFlowPost
X (Twitter) EN: https://x.com/BlockFlow_News