
Call To Arms V1 100-CODEX



By downloading and using Visual Studio Code, you agree to the license terms and privacy statement. VS Code automatically sends telemetry data and crash dumps to help us improve the product. If you would prefer not to have this data sent, see "How to Disable Crash Reporting" to learn how to disable it.


"Notice how Visual Studio automatically will suggest that I put in a last name, which is exactly what I want in this case," Kristensen said. "I want the LastName property to be inserted for me right here. I also want the full name, so let's hit enter again, and this time it suggests "age," which could be the accurate one, but I want full name so I'm just going to pretend it's not showing me anything and just start typing 'public string FullName,' and notice here I can just hit tab and I get my full name and it understands that FullName is a product of FirstName and LastName. This is really, really fantastic.








"I'm going to create an AddData method, so notice that based on the name RemoveData, you know it understands what remove means and add, and so it suggests that I created a method called Add, but what was crazy was that it knew what that would do inside the method, right. So it knew that it should not take the list and remove something, it should add to that list, so this is just absolutely fantastic."


"So I'm just writing here in plain English what it is that I would like to have happen, and now I can hit enter and Visual Studio will automatically suggest what that code might look like. So it understands what I'm saying in English in the code comment and can translate that into something that might be what I want. So this is absolutely fantastic. So this is the AI engine that is built upon a huge data set and using machine learning. It's able to take the context that the AI engine is aware of and pair that up with the big machine learning model of what do people do in situations like this. That's basically what the machine learning model knows, and when we pair those up together, Visual Studio can do amazing things like this."


That question -- how IntelliCode's suggestions differ from GitHub Copilot -- was answered by co-presenter Aaron Yim of the Visual Studio AI engine team. "The GitHub Copilot team, along with the IntelliCode team, are interested in kind of the same problem space," Yim said. "Our products are completely separate, and I think the key difference here is that GitHub's Copilot requires a server to inference against, whereas IntelliCode's inline completions and suggestions -- all of our features -- work entirely locally on your machine. So none of your code will leave your machine if you're using IntelliCode, and this is going to work on a plane if you'd like."


Kristensen replied: "So that's a big fundamental difference in that you're 100 percent local. But the big machine learning model and all the training data and all this sort of stuff -- does that then ship with Visual Studio so that locally Visual Studio is able to query it, or how does that work?"


More discussion -- including more insights from Microsoft -- ensued on a Reddit thread some nine months ago titled "Visual Studio 2022 IntelliSense is so good it's almost creepy" [as one commenter pointed out, there's some confusion between IntelliSense and IntelliCode]. There, one comment from "Aaron from IntelliCode team" reads: "The underlying tech between Copilot and IntelliCode is different. Copilot uses the Codex model and cloud inferencing to generate whole functions/tests at once. IntelliCode uses the GPT-C model inferenced locally, and will only ever generate up to a whole line of code at once."


Another comment in that thread reads: "Yep, in 2022 they added a Copilot-like functionality. It shows you a code prediction and you can accept it by pressing tab. It doesn't make huge code predictions, but it predicts things like method call arguments with a respectable amount of success, for a first version at least. It makes things like manually writing a bunch of props so much faster. I used to just copy-paste get; set; because it keeps cramping up my hands, and the prop snippet is wonky; now I just tab it off."


However, not everyone agrees that Visual Studio's AI engine is so amazing as to earn the several "creepy" characterizations floating around out there, such as in the aforementioned Reddit thread. For example, one comment on that thread said: "I don't know why answers like this get downvoted, but it's 100 percent true. IntelliSense (and Copilot) are great for boilerplate code, and that is pretty much it. Personally, I find it very useful when I need to map some objects, write simple null checks and stuff. But as soon as you start really doing the work, it's zero. Really, it's not creepy, and we are safe for at least another 50 years. You as developers should really know that this so-called AI is nothing but a bunch of almost random numbers, and its use cases are so, so limited. So relax please, it's handy but not creepy."


Feature-wise, AV1 is specifically designed for real-time applications (especially WebRTC) and higher resolutions (wider color gamuts, higher frame rates, UHD) than typical usage scenarios of the current generation (H.264) of video formats, where it is expected to achieve its biggest efficiency gains. It is therefore planned to support the color space from ITU-R Recommendation BT.2020 and up to 12 bits of precision per color component.[36] AV1 is primarily intended for lossy encoding, although lossless compression is supported as well.[37]


The "TrueMotion" predictor was replaced with a Paeth predictor which looks at the difference from the known pixel in the above-left corner to the pixel directly above and directly left of the new one and then chooses the one that lies in direction of the smaller gradient as predictor. A palette predictor is available for blocks with up to 8 dominant colors, such as some computer screen content. Correlations between the luminosity and the color information can now be exploited with a predictor for chroma blocks that is based on samples from the luma plane (cfl).[42] In order to reduce visible boundaries along borders of inter-predicted blocks, a technique called overlapped block motion compensation (OBMC) can be used. This involves extending a block's size so that it overlaps with neighboring blocks by 2 to 32 pixels, and blending the overlapping parts together.[46]


The H.264 name follows the ITU-T naming convention, where Recommendations are given a letter corresponding to their series and a recommendation number within the series. H.264 is part of "H-Series Recommendations: Audiovisual and multimedia systems". H.264 is further categorized into "H.200-H.499: Infrastructure of audiovisual services" and "H.260-H.279: Coding of moving video".[10] The MPEG-4 AVC name relates to the naming convention in ISO/IEC MPEG, where the standard is part 10 of ISO/IEC 14496, which is the suite of standards known as MPEG-4. The standard was developed jointly in a partnership of VCEG and MPEG, after earlier development work in the ITU-T as a VCEG project called H.26L. It is thus common to refer to the standard with names such as H.264/AVC, AVC/H.264, H.264/MPEG-4 AVC, or MPEG-4/H.264 AVC, to emphasize the common heritage. Occasionally, it is also referred to as "the JVT codec", in reference to the Joint Video Team (JVT) organization that developed it. (Such partnership and multiple naming is not uncommon. For example, the video compression standard known as MPEG-2 also arose from the partnership between MPEG and the ITU-T, where MPEG-2 video is known to the ITU-T community as H.262.[11]) Some software programs (such as VLC media player) internally identify this standard as AVC1.


The next major feature added to the standard was Scalable Video Coding (SVC). Specified in Annex G of H.264/AVC, SVC allows the construction of bitstreams that contain layers of sub-bitstreams that also conform to the standard, including one such bitstream known as the "base layer" that can be decoded by an H.264/AVC codec that does not support SVC. For temporal bitstream scalability (i.e., the presence of a sub-bitstream with a smaller temporal sampling rate than the main bitstream), complete access units are removed from the bitstream when deriving the sub-bitstream. In this case, high-level syntax and inter-prediction reference pictures in the bitstream are constructed accordingly. On the other hand, for spatial and quality bitstream scalability (i.e., the presence of a sub-bitstream with lower spatial resolution/quality than the main bitstream), the NAL (Network Abstraction Layer) is removed from the bitstream when deriving the sub-bitstream. In this case, inter-layer prediction (i.e., the prediction of the higher spatial resolution/quality signal from the data of the lower spatial resolution/quality signal) is typically used for efficient coding. The Scalable Video Coding extensions were completed in November 2007.
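
As a rough illustration of the temporal case, the sketch below drops complete access units above a target temporal layer. The AccessUnit type and TemporalId property are assumptions for illustration; real SVC extractors read the temporal ID from NAL unit headers.

    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical representation of one complete access unit.
    record AccessUnit(int TemporalId, byte[] Payload);

    static class SvcExample
    {
        // Keep every access unit at or below maxTemporalId; each layer
        // dropped lowers the frame rate, while the remaining units still
        // form a conforming sub-bitstream.
        public static List<AccessUnit> ExtractTemporalLayer(
            IEnumerable<AccessUnit> bitstream, int maxTemporalId)
        {
            return bitstream
                .Where(au => au.TemporalId <= maxTemporalId)
                .ToList();
        }
    }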


The next major feature added to the standard was Multiview Video Coding (MVC). Specified in Annex H of H.264/AVC, MVC enables the construction of bitstreams that represent more than one view of a video scene. An important example of this functionality is stereoscopic 3D video coding. Two profiles were developed in the MVC work: Multiview High profile supports an arbitrary number of views, and Stereo High profile is designed specifically for two-view stereoscopic video. The Multiview Video Coding extensions were completed in November 2009.


On October 30, 2013, Rowan Trollope from Cisco Systems announced that Cisco would release both binaries and source code of an H.264 video codec called OpenH264 under the Simplified BSD license, and pay all royalties for its use to MPEG LA for any software projects that use Cisco's precompiled binaries, thus making Cisco's OpenH264 binaries free to use. However, any software projects that use Cisco's source code instead of its binaries would be legally responsible for paying all royalties to MPEG LA. Target CPU architectures include x86 and ARM, and target operating systems include Linux, Windows XP and later, Mac OS X, and Android; iOS was notably absent from this list, because it doesn't allow applications to fetch and install binary modules from the Internet.[58][59][60] Also on October 30, 2013, Brendan Eich from Mozilla wrote that it would use Cisco's binaries in future versions of Firefox to add support for H.264 to Firefox where platform codecs are not available.[61] Cisco published the source code to OpenH264 on December 9, 2013.[62]

