Welcome to Shenzhen Deren Manufacturing Co., Ltd
Deren Precision Manufacturing Co., Ltd
Focus on custom parts and industrial blades.
Fine products, craftsman service, 10 years of precision manufacturing.
15814001449
Hotline & WeChat


          Industry information

Sora has arrived: AI text-to-video dazzles our eyes.

Time: 2024-02-21   Views: 13594
1. Introduction to Sora's Concept
On February 16, 2024, OpenAI released Sora, a large text-to-video model that generates videos from natural-language descriptions. Once the news broke, social media platforms around the world were once again shocked by OpenAI: Sora has suddenly raised the bar for AI video. Note that text-to-video tools such as Runway and Pika are still struggling to stay coherent for more than a few seconds, while Sora can directly generate a 60-second, single-shot video. Remarkably, Sora has not even been officially released yet, and it can already achieve this.
The name Sora comes from the Japanese word for "sky" (そら, sora), chosen to suggest its unlimited creative potential.
Sora's advantage over the aforementioned AI video models is that it can render details accurately, understand how objects exist in the physical world, and generate characters with rich emotions. The model can even generate videos from prompts or still images, and fill in missing frames in existing videos.
2. The Implementation Path of Sora
The significance of Sora lies in once again raising AIGC's ceiling for AI-driven content creation. Before this, text models such as ChatGPT had already begun to assist in content creation, including generating illustrations and visuals, and even using virtual humans to create short videos. Sora, by contrast, is a large model focused on video generation: given text or images as input, it can edit videos in various ways, including generation, connection, and extension. It belongs to the category of multimodal large models, a class that extends and builds on language models such as GPT.
Sora processes video patches in much the same way GPT-4 manipulates text tokens. The key innovation lies in treating video frames as sequences of patches, analogous to word tokens in language models, which lets the model manage all kinds of video information effectively. By conditioning on text, Sora can generate contextually relevant and visually coherent videos from text prompts.
In principle, Sora trains on video in three main steps. First, a video compression network reduces videos or images into a compact, efficient latent representation. Next, spatiotemporal patch extraction decomposes the video information into smaller units, each containing a portion of the video's spatial and temporal information, so that Sora can process them individually in later steps. Finally, for video generation, the encoded input text or images are decoded: a Transformer model (the same basic architecture underlying ChatGPT) decides how to transform or combine these units to form the complete video content.
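The spatiotemporal patch step can be sketched in a few lines of NumPy. This is a hypothetical illustration of ViT-style "tube" patchification based on the public description, not OpenAI's actual code; the tube size (2 frames by 16×16 pixels) is an assumed choice.

```python
import numpy as np

def extract_spacetime_patches(video, t=2, p=16):
    """Split a video tensor (T, H, W, C) into flattened spatiotemporal
    "tube" patches of shape (t, p, p, C), analogous to splitting text
    into tokens. The tube size (t, p) is an illustrative assumption."""
    T, H, W, C = video.shape
    assert T % t == 0 and H % p == 0 and W % p == 0
    # Carve the video into a (T//t, H//p, W//p) grid of tubes...
    v = video.reshape(T // t, t, H // p, p, W // p, p, C)
    v = v.transpose(0, 2, 4, 1, 3, 5, 6)  # group each tube's pixels together
    # ...then flatten each tube into one "token" row.
    return v.reshape(-1, t * p * p * C)

# An 8-frame, 64x64 RGB clip yields 4*4*4 = 64 patch tokens of 1536 values.
video = np.zeros((8, 64, 64, 3), dtype=np.float32)
tokens = extract_spacetime_patches(video)
print(tokens.shape)  # (64, 1536)
```

A diffusion Transformer would then operate on such token sequences, and the inverse reshape would reassemble generated patches back into a video.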
          Overall, the emergence of Sora will further promote the development of AI video generation and multimodal large models, bringing new possibilities to the field of content creation.
3. Sora's Six Advantages
A Daily Economic News reporter reviewed the technical report and summarized six advantages of Sora:
(1) Accuracy and diversity: Sora can turn short text descriptions into high-definition videos up to 1 minute long. It accurately interprets the user's text input and generates high-quality video clips with a variety of scenes and characters, covering a wide range of subjects, from people and animals to lush landscapes, urban scenes, gardens, and even an underwater New York City, providing diverse content according to user requirements. According to Medium, Sora can accurately interpret prompts of up to 135 words.
(2) Powerful language understanding: OpenAI applies the re-captioning technique from the DALL·E model to generate descriptive captions for visual training data, which improves not only textual fidelity but also the overall quality of the videos. In addition, as with DALL·E 3, OpenAI uses GPT to expand short user prompts into longer, detailed descriptions that are sent to the video model. This enables Sora to generate high-quality videos that follow user prompts accurately.
(3) Generating videos from images or videos: Sora can not only turn text into video but also accept other types of input prompts, such as existing images or videos. This lets Sora perform a wide range of image- and video-editing tasks, such as creating perfectly looping videos, animating static images, and extending videos forward or backward in time. In the report, OpenAI presented demo videos generated from DALL·E 2 and DALL·E 3 images, which not only proves Sora's powerful capabilities but also demonstrates its potential in image and video editing.
(4) Video extension: Because it accepts diverse input prompts, users can create videos from images or supplement existing videos. As a Transformer-based diffusion model, Sora can also extend videos forward or backward along the timeline.
(5) Excellent device compatibility: Sora can sample at resolutions from widescreen 1920×1080 to portrait 1080×1920, and can easily handle any video size in between. This means Sora can generate content that exactly matches the native aspect ratio of a given device. Before generating high-resolution content, Sora can also quickly prototype content at a smaller size.
(6) Consistency and continuity of scenes and objects: Sora can generate videos with dynamic camera changes, in which characters and scene elements move naturally through three-dimensional space. Sora also handles occlusion well: a problem with existing models is that they can lose track of objects that leave the field of view, whereas Sora, by predicting many frames at a time, keeps the subject of the image consistent even when it is temporarily out of view.
4. Disadvantages of Sora
Although Sora is very powerful, it still has problems simulating the physics of complex scenes, understanding specific cause-and-effect relationships, handling spatial details, and accurately depicting events that unfold over time.
In one video generated by Sora, the overall picture is highly coherent, with excellent image quality, detail, lighting, and color. On close inspection, however, the characters' legs are slightly twisted, and their steps do not match the overall motion of the scene.
In another video, the number of dogs keeps increasing; although the transitions are very smooth, the result may have drifted from the original requirements for the video.
          (1) Inaccurate simulation of physical interaction:
          The Sora model is not precise enough in simulating basic physical interactions, such as glass breakage. This may be because the model lacks sufficient examples of such physical events in the training data, or the model is unable to fully learn and understand the underlying principles of these complex physical processes.
          (2) Incorrect change in object state:
          When simulating interactions involving significant changes in object state, such as eating food, Sora may not always accurately reflect the changes. This indicates that the model may have limitations in understanding and predicting the dynamic process of object state changes.
          (3) Incoherence in long-term video samples:
When generating long-duration video samples, Sora may produce incoherent plots or details, likely because the model has difficulty maintaining contextual consistency over long time spans.
(4) Sudden appearance of objects:
          Objects may appear in videos for no reason, indicating that the model still needs to improve its understanding of spatial and temporal continuity.
Here we need to introduce the concept of the "world model".
What is a world model? Let me give an example.
From memory, you know roughly how much a cup of coffee weighs. So when you reach to pick one up, your brain predicts how much force to use, and the cup comes up smoothly; you don't even notice. But what if the cup happens to be empty? You apply far too much force to a very light cup, and your hand immediately feels that something is wrong. You then add a note to your memory: the cup may also be empty, so your next prediction won't be wrong. The more you do, the more complex the world model formed in your brain becomes, and the more accurately it predicts how the world will respond. This is how humans interact with the world: through a world model.
Sora-generated videos can make similar mistakes: an object that is bitten may not always show the bite mark, and the model still errs at times. Yet it is already remarkably powerful, even frightening, because "remember first, predict later" is exactly how humans understand the world. This mode of thinking is the world model.
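The coffee-cup loop above (predict from memory, act, feel the error, update memory) can be sketched as a toy prediction-error update. The names and numbers are purely illustrative; this shows the general idea of a world model, not anything from Sora's implementation.

```python
def update_memory(remembered, observed, rate=0.5):
    """Nudge the remembered value toward what was actually observed."""
    return remembered + rate * (observed - remembered)

remembered_weight = 300.0              # grams: memory of a full coffee cup
for observed in (300.0, 300.0, 50.0):  # the third cup happens to be empty
    surprise = observed - remembered_weight  # "something feels wrong"
    remembered_weight = update_memory(remembered_weight, observed)
    print(f"surprise={surprise:+.0f}g -> memory={remembered_weight:.0f}g")
```

After the surprising empty cup, the stored estimate shifts toward the new observation, so the next prediction errs less, which is the "remember first, predict later" cycle in miniature.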
There is a sentence in Sora's technical documentation:
          Our results suggest that scaling video generation models is a promising path towards building general purpose simulators of the physical world
In other words, what OpenAI ultimately wants to build is not a text-to-video tool but a universal "physical world simulator": a world model, a model of the real world.
Address: 1st Floor, No. 67, Langkou Industrial Zone, Dalang Street, Longhua District, Shenzhen