Date: 2024-03-15
Original title: Crypto AI Mafia
Original author: @Rui, investor at SevenX Ventures
Original source: Mirror
Translated by: Kate, Mars Finance
Notes from the House of Decentralized Artificial Intelligence at ETHDenver 2024, co-hosted by SevenX Ventures, MyShell, and Jessy's Hacker House
As AI develops rapidly in the web3 world, it is becoming harder to separate genuine innovation from narrative bubbles. During ETHDenver we invited twelve of the most prominent members of the AI mafia to briefly outline their projects' visions, approaches, and use cases, and to explain how they are shaking up today's world.
Here are the key questions, together with our mafia's answers:
Data:
• Data provisioning: how do you obtain AI training data? @Grass
• Data sourcing: how do you protect the IP of data sources? @StoryProtocol
• Data integrity: how do you guarantee the data a model actually uses? @SpaceAndTime
Models:
• Open economy: how do you build an incentivized open platform? @Bittensor, @Sentient
• Model integrity: how do you prove that model outputs have not been tampered with? @ModulusLabs, @Ora
Infrastructure:
• General infrastructure: how do you tie all of the infrastructure together? @Ritual
Applications:
• AI agents: how do you make agents intelligent, composable, and ownable? @FuturePrimitive, @Olas, @Myshell
Full YouTube playlist: https://www.youtube.com/playlist?list=PLFRYxG8q7EY6SgJHzEefMEq-VyrhzK20n
Grass
• Why: data is the foundation of all AI training, but extractive gatekeeping makes high-quality training data hard to obtain. Large amounts of data can be scraped from the public web, but major websites routinely block commercial data centers.
• Overview: Grass is a data-provisioning protocol that opens up data access and makes AI infrastructure fair.
• How it works: users install a Chrome extension and contribute their spare compute and bandwidth to scan the internet for AI data. Grass operates a global network of nearly one million web-scraping nodes. Using this network, it scrapes more than 1 TB of data per day, then cleans and processes that data into structured datasets.
• Notably: Grass nodes currently operate in 190 countries worldwide.
• YouTube: https://www.youtube.com/watch?v=dwBlqwOimig&list=PLFRYxG8q7EY6SgJHzEefMEq-VyrhzK20n&index=2
Speaker's Twitter: @0xdrej
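To make a node's job concrete, here is a minimal sketch of the "clean and process into structured data" step described above. This is an illustration only, using the Python standard library and hypothetical names (`TextExtractor`, `clean_page`); it is not Grass's actual extension code.

```python
import json
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects visible text from a page, skipping <script>/<style> content."""
    def __init__(self):
        super().__init__()
        self._skip = False
        self.chunks = []
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True
    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False
    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())

def clean_page(url: str, raw_html: str) -> dict:
    """Turn one scraped page into a structured dataset record."""
    parser = TextExtractor()
    parser.feed(raw_html)
    return {"url": url, "text": " ".join(parser.chunks)}

record = clean_page("https://example.com",
                    "<html><body><h1>Hi</h1><script>x()</script><p>AI data</p></body></html>")
print(json.dumps(record))
```

A real node would fetch pages over the contributor's residential connection before cleaning them, which is precisely what lets the network bypass data-center blocks.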
Story Protocol
Why: AI remixing is illegal yet inevitable. The main obstacles to AI development are the lack of monetization and of attribution for IP (intellectual property) and content creators.
Overview: a composable on-chain IP layer that lets creators set their own rules of engagement, adding legibility and liquidity to IP worldwide.
How to do it: creators can purchase license NFTs to convert their static IP into programmable IP. Programmable IP is a layer that any program can read and write, consisting of nouns and verbs. The nouns are the data structure, the associated IP metadata, and the use of ERC-6551; the verbs are the modules, a set of capabilities for the IP asset such as licensing, revenue streams for derivative works, and global access. As soon as derivatives are monetized, the proceeds flow back automatically.
Notably: Story Protocol can be used to customize license leasing, derivatives, regions, channels, validity periods, revocability, transferability, attribution, and more.
Youtube website: https://www.youtube.com/watch?v=ymq1mhRSxTg&list=PLFRYxG8q7EY6SgJHzEefMEq-VyrhzK20n&index=3&t=201s
Speaker’s Twitter: @jasonjzhao
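The noun/verb split and the automatic royalty flow above can be sketched in a few lines. This is a hedged toy model, not Story Protocol's contracts: `IPAsset`, `monetize`, and the 10% royalty figure are all hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IPAsset:
    """'Noun': the data structure holding the IP and its metadata."""
    owner: str
    royalty_bps: int                    # parent's share of derivative revenue, in basis points
    parent: Optional["IPAsset"] = None  # set when this asset is a derivative
    balance: int = 0

def monetize(asset: IPAsset, revenue: int) -> None:
    """'Verb': a revenue module; when a derivative earns, the parent's share flows back automatically."""
    if asset.parent is not None:
        share = revenue * asset.parent.royalty_bps // 10_000
        asset.parent.balance += share
        revenue -= share
    asset.balance += revenue

original = IPAsset(owner="alice", royalty_bps=1_000)            # 10% royalty on derivatives
remix = IPAsset(owner="bob", royalty_bps=500, parent=original)  # a licensed derivative
monetize(remix, 1_000)
print(original.balance, remix.balance)  # alice's share flowed back without her acting
```

In the real protocol these rules live on-chain (via ERC-6551 token-bound accounts), so the royalty terms are enforced by the chain rather than by goodwill.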
Space and Time (SxT):
Reason: As LLMs evolve, large companies can bias, modify, or tamper with datasets and parameters; it is important to cryptographically prove that datasets are untampered and that the same datasets are used during LLM training. Additionally, SxT has been exploring ways to clean copyrighted data, extract data from verifiable vector databases, and inject hints into the inference process.
Overview: SxT is an indexer and ZK prover that proves SQL queries or vector searches against indexed data.
How: LLM providers can load their on-chain/off-chain training datasets into Space and Time, where the data is witnessed and threshold-signed using cryptographic commitments, which are later used to prove that the dataset was used for training. From there, litigators or auditors can verify that the dataset has not been tampered with after training. SxT built the GPU accelerator "Blitzar", which has proved queries over a 2-million-row table with 14 seconds of verification time on a single GPU.
Notably: SxT lets users write queries in plain text; within seconds, OpenAI retrieves context from a vector-search database and writes accurate SxT SQL, which the prover executes, returning the proof in 4 seconds.
Youtube website: https://www.youtube.com/watch?v=cxT0vcU4mSo&list=PLFRYxG8q7EY6SgJHzEefMEq-VyrhzK20n&index=4
Speaker Twitter: @chiefbuidl
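The "commit, then prove untampered" idea above can be illustrated with a plain Merkle-style commitment. This sketch is an assumption-laden stand-in: SxT's actual scheme uses threshold signatures and ZK proofs, not a bare SHA-256 tree, and `commit` is a hypothetical name.

```python
import hashlib

def commit(rows: list) -> bytes:
    """Merkle-style commitment: hash each row, then fold pairwise to a single root."""
    level = [hashlib.sha256(r).digest() for r in rows]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last node on odd-sized levels
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

dataset = [b"row1", b"row2", b"row3"]
root = commit(dataset)                   # published before training starts

# An auditor later recomputes the commitment from the claimed training data:
assert commit([b"row1", b"row2", b"row3"]) == root       # same data: verifies
assert commit([b"row1", b"TAMPERED", b"row3"]) != root   # any change: detected
print(root.hex())
```

The key property is the one the talk relies on: after the root is witnessed, no one (including the LLM provider) can swap rows in or out without the mismatch being detectable.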
Bittensor
Reason: OpenAI's goal is to monopolize control of artificial intelligence.
Overview: Bittensor is a decentralized, open-source AI platform.
How to use: The Bittensor network has 32 subnets. These started with models but have since expanded to storage, compute, crawling, tracking, and other AI domains. $TAO incentivizes subnet builders to continuously improve their models or projects. Validators rank subnet outputs; the rankings change the distribution of $TAO, and the lowest-ranked participant is kicked out of the network. This mechanism ensures that models compete to produce the best output and that the work most valuable to the collective is rewarded.
Notably: powerful applications have emerged, such as FileTAO for decentralized storage, Cortex.t for OpenAI-style inference, and Nous Research for fine-tuning LLMs; Fractal Research works on applications such as decentralized text-to-video.
Youtube website: https://www.youtube.com/watch?v=xkXBDCaPMYk&list=PLFRYxG8q7EY6SgJHzEefMEq-VyrhzK20n&index=6
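The rank-then-reward mechanism described above can be sketched as a single emission round. This is a deliberately simplified illustration with hypothetical names (`distribute_tao`); real Bittensor emissions involve consensus-weighted validator scores, not one flat dictionary.

```python
def distribute_tao(scores: dict, emission: float) -> dict:
    """Split one emission pro-rata to validator scores and deregister the lowest scorer."""
    lowest = min(scores, key=scores.get)
    survivors = {name: s for name, s in scores.items() if name != lowest}
    total = sum(survivors.values())
    return {name: emission * s / total for name, s in survivors.items()}

# Validators have ranked three subnet miners' outputs:
scores = {"miner_a": 0.5, "miner_b": 0.3, "miner_c": 0.2}
rewards = distribute_tao(scores, emission=100.0)
print(rewards)  # miner_c is kicked out; a and b split the emission pro-rata
```

The competitive pressure comes from the last line of the talk's description: the weakest participant earns nothing and loses its slot, so every subnet member must keep improving output quality to stay in the reward set.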
Sentient
Reason: building AGI is dangerous, facing both the "threat of human extinction" and the risks of capitalist frameworks, so it inherently needs crypto platforms; and crypto platforms need native killer applications.
Overview: Sentient is a sovereign incentive-driven artificial intelligence development platform.
How to use: Use a crowdsourcing approach that allows the community to coordinate and contribute training models to reduce costs, use open protocols to control inference, enable composability between models, and flow value back to network participants. Aggregating the power of web2 and web3, and leveraging tokens, Sentient will greatly incentivize developers to build trustless AGI.
Youtube website: https://www.youtube.com/watch?v=1fbwIGG7PV8&list=PLFRYxG8q7EY6SgJHzEefMEq-VyrhzK20n&index=10
Modulus Labs
Why: as the advance of AGI becomes unstoppable, we need to prove that AI outputs are accountable and secure: generated by a certified model, not manipulated, and not dependent on the good behavior of a trusted central authority.
Overview: Modulus built a dedicated AI ZK prover "Remainder" to provide AI capabilities to dapps at a fraction of the cost.
How to use: using a generic modern ZK proof system is impractical, since it is roughly 10,000 to 100,000 times more expensive than unverified AI inference. Modulus built a custom prover for AI inference that shows only ~180x overhead.
Notably: one implementation is Upshot, whose complex evaluation model can only run off-chain, which raises trust issues. Upshot sends valuable AI evaluations to Modulus every hour; Modulus generates "correctness proofs" for the AI computations, aggregates them, and sends them to Ethereum for final verification.
Youtube website: https://www.youtube.com/watch?v=4JRh2eeZCO0&list=PLFRYxG8q7EY6SgJHzEefMEq-VyrhzK20n&index=7
Speaker Twitter: @realDanielShorr
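The hourly batch-prove-aggregate-verify loop above can be sketched with commitments standing in for the proofs. This is purely illustrative: a real prover such as Remainder proves the inference computation itself, whereas `prove_batch` here is a hypothetical hash-based placeholder.

```python
import hashlib, json

def prove_batch(evaluations: list) -> str:
    """Placeholder 'correctness proof': a deterministic commitment to one
    hourly batch of AI evaluations. (A real ZK prover proves the inference.)"""
    blob = json.dumps(evaluations, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def aggregate(proofs: list) -> str:
    """Fold the hourly proofs into one digest cheap enough to check on Ethereum."""
    return hashlib.sha256("".join(proofs).encode()).hexdigest()

hourly_batches = [
    [{"nft": "x", "price_eval": 10}],   # hour 1 of off-chain evaluations
    [{"nft": "x", "price_eval": 11}],   # hour 2
]
proofs = [prove_batch(batch) for batch in hourly_batches]
onchain_digest = aggregate(proofs)

# The on-chain verifier only ever touches the small aggregated artifact:
assert aggregate([prove_batch(b) for b in hourly_batches]) == onchain_digest
print(onchain_digest)
```

The design point this mirrors is the cost split: heavy evaluation stays off-chain, and Ethereum only verifies one small aggregated proof per interval.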
Ora
Reason: AI models cannot be run on-chain, since thousands of computers would each have to perform the inference. Verifying results on-chain is feasible, but as model size increases the cost of ZKML grows exponentially, which calls for an approach whose cost grows linearly: OPML.
Overview: Ora is an on-chain AI oracle that uses OPML for AI models of any scale.
How to use: oracles delegate computation to off-chain nodes. Users initiate transactions from smart contracts with a prompt and a named model. The OAO contract delegates the transaction to an OPML node, which performs the inference, generates fraud proofs, and submits them for verification. The verified result is returned to the transaction initiator. OPML still needs the help of ZKP to achieve input/output privacy and instant finality. Additionally, ORA's ZK oracles can generate storage proofs for OPML results, so OPML does not need to be re-executed when a result is reused.
Using ORA today, Stable Diffusion and 7B-LLaMA can be run on the Ethereum mainnet. ORA can support AI-managed DAOs and AIGC NFTs (such as EIP-7007), thereby enhancing model ownership.
Youtube website: https://www.youtube.com/watch?v=i2Zkz45AGr4&list=PLFRYxG8q7EY6SgJHzEefMEq-VyrhzK20n
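The optimistic flow described for Ora (accept a result, allow fraud proofs during a window) can be sketched as a toy state machine. All names here (`OptimisticOracle`, `Request`) are hypothetical, and re-execution stands in for OPML's actual fraud-proof protocol.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Request:
    prompt: str
    result: Optional[str] = None
    challenge_open: bool = True

class OptimisticOracle:
    """Toy OPML-style oracle: results are accepted optimistically and can be
    overturned by a fraud proof (here: a re-execution that disagrees)."""
    def __init__(self, model):
        self.model = model              # stand-in for off-chain model inference
    def submit(self, prompt: str) -> Request:
        # The OPML node runs inference off-chain and posts the result optimistically.
        return Request(prompt, result=self.model(prompt))
    def challenge(self, req: Request, reexecuted: str) -> bool:
        """A challenger re-executes the inference; fraud is proven on mismatch."""
        if req.challenge_open and reexecuted != req.result:
            req.result = reexecuted     # the fraudulent result is replaced
            return True
        return False
    def finalize(self, req: Request) -> str:
        req.challenge_open = False      # challenge window closes
        return req.result

honest_model = lambda p: p.upper()      # placeholder for e.g. 7B-LLaMA inference
oracle = OptimisticOracle(honest_model)
req = oracle.submit("hello")
fraud = oracle.challenge(req, honest_model("hello"))  # re-execution matches: no fraud
print(fraud, oracle.finalize(req))
```

This also shows why the talk notes that OPML still needs ZKP for instant finality: an optimistic result is only final once the challenge window has closed.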