These builtins target RVV 1.0-draft and document the RVV intrinsics programming model.
Please see rvv-intrinsic-rfc.md
This chapter has no intrinsics; it is kept so the chapter numbering stays aligned with the riscv-v-spec chapters.
This chapter has no intrinsics; it is kept so the chapter numbering stays aligned with the riscv-v-spec chapters.
Please see rvv-intrinsic-rfc.md
- vsetvli
- vsetvl
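The configuration intrinsics return the granted vector length for a requested application vector length, SEW, and LMUL, which is what strip-mining loops are built on. A minimal sketch, assuming the un-prefixed intrinsic names used by this proposal (vsetvl_e32m1, vle32_v_i32m1, vadd_vv_i32m1, vse32_v_i32m1) and the <riscv_vector.h> header; exact spellings may differ between toolchains:

```c
#include <riscv_vector.h>
#include <stddef.h>
#include <stdint.h>

/* Strip-mined vector add: c[i] = a[i] + b[i] for i in [0, n).
 * vsetvl_e32m1 requests a vector length for SEW=32/LMUL=1 and
 * returns the number of elements the hardware will process.    */
void vec_add(int32_t *c, const int32_t *a, const int32_t *b, size_t n) {
  for (size_t vl; n > 0; n -= vl, a += vl, b += vl, c += vl) {
    vl = vsetvl_e32m1(n);                    /* granted vl <= n   */
    vint32m1_t va = vle32_v_i32m1(a, vl);    /* unit-stride loads */
    vint32m1_t vb = vle32_v_i32m1(b, vl);
    vse32_v_i32m1(c, vadd_vv_i32m1(va, vb, vl), vl);
  }
}
```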
Reinterpret the contents of a vector register as a different type, without changing any bits and without generating any RVV instructions.
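As an illustration, a reinterpret cast lets integer bit operations be applied to a floating-point vector. A minimal sketch, assuming reinterpret intrinsic names of the form vreinterpret_v_f32m1_i32m1 / vreinterpret_v_i32m1_f32m1:

```c
#include <riscv_vector.h>
#include <stddef.h>

/* fabs on a vector: clear the sign bit by viewing the f32 elements
 * as i32, masking bit 31, and viewing the result as f32 again.
 * The casts move no data and emit no RVV instructions.            */
vfloat32m1_t vec_fabs(vfloat32m1_t v, size_t vl) {
  vint32m1_t bits = vreinterpret_v_f32m1_i32m1(v);
  bits = vand_vx_i32m1(bits, 0x7fffffff, vl);
  return vreinterpret_v_i32m1_f32m1(bits);
}
```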
These utility functions help users truncate or extend the current LMUL under the same SEW, regardless of vl; they do not change the contents of the vl register.
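A minimal sketch of the intent, assuming extension/truncation intrinsic names of the form vlmul_ext_v_i32m1_i32m2 and vlmul_trunc_v_i32m2_i32m1 (SEW stays 32, only the register-group size changes):

```c
#include <riscv_vector.h>

/* Grow an LMUL=1 value into an LMUL=2 register group; the upper
 * half of the wider group is unspecified.                        */
vint32m2_t widen_group(vint32m1_t v) {
  return vlmul_ext_v_i32m1_i32m2(v);
}

/* Keep only the low LMUL=1 part of an LMUL=2 register group.
 * Neither function reads or writes vl.                           */
vint32m1_t narrow_group(vint32m2_t v) {
  return vlmul_trunc_v_i32m2_i32m1(v);
}
```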
/* Unprivileged vector CSRs accessible through these utility functions. */
enum RVV_CSR {
  RVV_VSTART = 0,
  RVV_VXSAT,
  RVV_VXRM,
  RVV_VCSR,
};
/* Read or write one of the vector CSRs named above. */
unsigned long vread_csr(enum RVV_CSR csr);
void vwrite_csr(enum RVV_CSR csr, unsigned long value);
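A short usage sketch of the interface above, given the declarations it defines; the vxrm encoding (0 = round-to-nearest-up) is taken from the v-spec:

```c
/* Select the fixed-point rounding mode and clear the sticky
 * saturation flag before a sequence of saturating operations. */
void fixed_point_setup(void) {
  vwrite_csr(RVV_VXRM, 0);   /* 0 = rnu (round-to-nearest-up) */
  vwrite_csr(RVV_VXSAT, 0);  /* clear vxsat                   */
}

/* Returns non-zero if any saturating op clipped its result. */
int saturation_occurred(void) {
  return vread_csr(RVV_VXSAT) != 0;
}
```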
- vle<eew>.v
- vse<eew>.v
- vlse<eew>.v
- vsse<eew>.v
- vlxei<eew>.v
- vsxei<eew>.v
- vsuxei<eew>.v
- vle<eew>ff.v
- The unit-stride fault-only-first load instruction is used to vectorize loops with data-dependent exit conditions (while loops). These instructions execute as a regular load except that they will only take a trap on element 0. If an element > 0 raises an exception, that element and all following elements in the destination vector register are not modified, and the vector length vl is reduced to the number of elements processed without a trap. A strlen-style sketch using this load appears after the list below.
- vlsege<eew>.v
- vssege<eew>.v
- vlssege<eew>.v
- vsssege<eew>.v
- vlxsegei<eew>.v
- vsxsegei<eew>.v
- vamoswapei<eew>.v
- vamoaddei<eew>.v
- vamoxorei<eew>.v
- vamoandei<eew>.v
- vamoorei<eew>.v
- vamominei<eew>.v
- vamomaxei<eew>.v
- vamominuei<eew>.v
- vamomaxuei<eew>.v
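The strlen-style sketch referenced above for the fault-only-first load, assuming the un-prefixed intrinsic names of this proposal (vsetvl_e8m1, vle8ff_v_i8m1 returning the possibly reduced vl through a pointer parameter, vmseq_vx_i8m1_b8, vfirst_m_b8):

```c
#include <riscv_vector.h>
#include <stddef.h>
#include <stdint.h>

/* strlen with a data-dependent exit: the '\0' position is unknown,
 * so each chunk is loaded with a fault-only-first load.  If an
 * element past element 0 would trap, vl is reduced instead and the
 * loop simply continues with the shorter chunk.                    */
size_t vec_strlen(const char *s) {
  size_t len = 0;
  for (;;) {
    size_t vl = vsetvl_e8m1((size_t)-1);           /* ask for VLMAX      */
    vint8m1_t v = vle8ff_v_i8m1((const int8_t *)(s + len), &vl, vl);
    vbool8_t zero = vmseq_vx_i8m1_b8(v, 0, vl);    /* where is '\0'?     */
    long idx = vfirst_m_b8(zero, vl);              /* -1 if none in vl   */
    if (idx >= 0)
      return len + (size_t)idx;
    len += vl;                                     /* vl may have shrunk */
  }
}
```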
This chapter has no intrinsics; it is kept so the chapter numbering stays aligned with the riscv-v-spec chapters.
This chapter has no intrinsics; it is kept so the chapter numbering stays aligned with the riscv-v-spec chapters.
This chapter has no intrinsics; it is kept so the chapter numbering stays aligned with the riscv-v-spec chapters.
- vadd.{vv,vx,vi}
- vsub.{vv,vx}
- vrsub.{vx,vi}
- vneg.v
- vwaddu.{vv,vx,wv,wx}
- vwsubu.{vv,vx,wv,wx}
- vwadd.{vv,vx,wv,wx}
- vwsub.{vv,vx,wv,wx}
- vwcvt.x.x.v
- vwcvtu.x.x.v
- vzext.vf{2,4,8}
- vsext.vf{2,4,8}
- vadc.{vvm,vxm,vim}
- vmadc.{vvm,vxm,vim}
- vsbc.{vvm,vxm}
- vmsbc.{vvm,vxm}
- vand.{vv,vx,vi}
- vxor.{vv,vx,vi}
- vor.{vv,vx,vi}
- vnot.v
- vsll.{vv,vx,vi}
- vsrl.{vv,vx,vi}
- vsra.{vv,vx,vi}
- A full complement of vector shift instructions is provided, including logical shift left, and logical (zero-extending) and arithmetic (sign-extending) shift right.
- vnsra.{wv,wx,wi}
- vnsrl.{wv,wx,wi}
- vncvt.x.x.w
- vmseq.{vv,vx,vi}
- vmsne.{vv,vx,vi}
- vmsltu.{vv,vx,vi}
- vmslt.{vv,vx,vi}
- vmsleu.{vv,vx,vi}
- vmsle.{vv,vx,vi}
- vmsgtu.{vv,vx,vi}
- vmsgt.{vv,vx,vi}
- vminu.{vv,vx}
- vmin.{vv,vx}
- vmaxu.{vv,vx}
- vmax.{vv,vx}
- vmul.{vv,vx}
- vmulh.{vv,vx}
- vmulhu.{vv,vx}
- vmulhsu.{vv,vx}
- vdivu.{vv,vx}
- vdiv.{vv,vx}
- vremu.{vv,vx}
- vrem.{vv,vx}
- vwmul.{vv,vx}
- vwmulu.{vv,vx}
- vwmulsu.{vv,vx}
- vmacc.{vv,vx}
- vnmsac.{vv,vx}
- vmadd.{vv,vx}
- vnmsub.{vv,vx}
- vwmaccu.{vv,vx}
- vwmacc.{vv,vx}
- vwmaccsu.{vv,vx}
- vwmaccus.{vv,vx}
- vmerge.{vvm,vxm,vim}
- vmv.v.v
- vmv.v.x
- vmv.v.i
- vsaddu.{vv,vx,vi}
- vsadd.{vv,vx,vi}
- vssubu.{vv,vx}
- vssub.{vv,vx}
- vaadd.{vv,vx,vi}
- vasub.{vv,vx}
- vsmul.{vv,vx}
- vssrl.{vv,vx,vi}
- vssra.{vv,vx,vi}
- vnclipu.{wx,wv,wi}
- vnclip.{wx,wv,wi}
- vfadd.{vv,vf}
- vfsub.{vv,vf}
- vfrsub.vf
- vfwadd.{vv,vf,wv,wf}
- vfwsub.{vv,vf,wv,wf}
- vfmul.{vv,vf}
- vfdiv.{vv,vf}
- vfrdiv.vf
- vfwmul.{vv,vf}
- vfmacc.{vv,vf}
- vfnmacc.{vv,vf}
- vfmsac.{vv,vf}
- vfnmsac.{vv,vf}
- vfmadd.{vv,vf}
- vfnmadd.{vv,vf}
- vfmsub.{vv,vf}
- vfnmsub.{vv,vf}
- vfwmacc.{vv,vf}
- vfwnmacc.{vv,vf}
- vfwmsac.{vv,vf}
- vfwnmsac.{vv,vf}
- vfsqrt.v
- vfrsqrt7.v
- vfrec7.v
- vfmin.{vv,vf}
- vfmax.{vv,vf}
- vfsgnj.{vv,vf}
- vfsgnjn.{vv,vf}
- vfsgnjx.{vv,vf}
- vfneg.v
- vfabs.v
- vmfeq.{vv,vf}
- vmfne.{vv,vf}
- vmflt.{vv,vf}
- vmfle.{vv,vf}
- vmfgt.{vv,vf}
- vmfge.{vv,vf}
- vfclass.v
- vfmerge.vfm
- vfmv.v.f
- vfcvt.xu.f.v
- vfcvt.x.f.v
- vfcvt.rtz.xu.f.v
- vfcvt.rtz.x.f.v
- vfcvt.f.xu.v
- vfcvt.f.x.v
- vfwcvt.xu.f.v
- vfwcvt.x.f.v
- vfwcvt.rtz.xu.f.v
- vfwcvt.rtz.x.f.v
- vfwcvt.f.xu.v
- vfwcvt.f.x.v
- vfwcvt.f.f.v
- vfncvt.xu.f.w
- vfncvt.x.f.w
- vfncvt.rtz.xu.f.w
- vfncvt.rtz.x.f.w
- vfncvt.f.xu.w
- vfncvt.f.x.w
- vfncvt.f.f.w
- vfncvt.rod.f.f.w
- vredsum.vs
- vredmaxu.vs
- vredmax.vs
- vredminu.vs
- vredmin.vs
- vredand.vs
- vredor.vs
- vredxor.vs
- Reduction intrinsics will generate code using the tail-undisturbed policy unless vundefined() is passed as the dest argument (a sketch appears after these instruction lists).
- vwredsumu.vs
- vwredsum.vs
- Reduction intrinsics will generate code using the tail-undisturbed policy unless vundefined() is passed as the dest argument.
- vfredosum.vs
- vfredsum.vs
- vfredmax.vs
- vfredmin.vs
- Reduction intrinsics will generate code using the tail-undisturbed policy unless vundefined() is passed as the dest argument.
- vfwredosum.vs
- vfwredsum.vs
- Reduction intrinsics will generate code using the tail-undisturbed policy unless vundefined() is passed as the dest argument.
- vmand.mm
- vmnand.mm
- vmandnot.mm
- vmxor.mm
- vmor.mm
- vmnor.mm
- vmornot.mm
- vmxnor.mm
- vmmv.m
- vmclr.m
- vmset.m
- vmnot.m
- vpopc.m
- vfirst.m
- vmsbf.m
- vmsif.m
- vmsof.m
- viota.m
- vid.v
- vmv.s.x
- vmv.x.s
- The vmv.s.x intrinsic will generate code using the tail-undisturbed policy unless vundefined() is passed as the dest argument.
- vfmv.f.s
- vfmv.s.f
- The vfmv.s.f intrinsic will generate code using the tail-undisturbed policy unless vundefined() is passed as the dest argument.
- vslideup.{vx,vi}
- vslidedown.{vx,vi}
- vslide1up.vx
- vslide1down.vx
- vfslide1up.vx
- vfslide1down.vx
- Unmasked vslideup and vslidedown intrinsics will generate code using the tail-undisturbed policy unless vundefined() is passed as the dest argument.
- vrgather.{vx,vi}
- vcompress.vm
- vcompress intrinsics will generate code using the tail-undisturbed policy unless vundefined() is passed as the dest argument.
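The sketch referenced in the reduction notes above: the dest argument selects the tail policy, and vundefined() tells the compiler the tail does not need to be preserved. Assuming the un-prefixed intrinsic names of this proposal (vredsum_vs_i32m1_i32m1 with a dest operand, vundefined_i32m1, vmv_v_x_i32m1, vmv_x_s_i32m1_i32):

```c
#include <riscv_vector.h>
#include <stddef.h>
#include <stdint.h>

/* Sum of a[0..n-1].  Only element 0 of 'sum' is ever consumed, so
 * passing vundefined_i32m1() as dest lets the compiler use a tail-
 * agnostic reduction; passing a live vector as dest would instead
 * force tail-undisturbed code.                                      */
int32_t vec_sum(const int32_t *a, size_t n) {
  if (n == 0)
    return 0;
  size_t vl = vsetvl_e32m1(n);
  vint32m1_t sum = vmv_v_x_i32m1(0, vl);   /* running total in element 0 */
  for (; n > 0; n -= vl, a += vl) {
    vl = vsetvl_e32m1(n);
    vint32m1_t va = vle32_v_i32m1(a, vl);
    /* result[0] = sum(va[0..vl-1]) + sum[0] */
    sum = vredsum_vs_i32m1_i32m1(vundefined_i32m1(), va, sum, vl);
  }
  return vmv_x_s_i32m1_i32(sum);
}
```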
This chapter has no intrinsics; it is kept so the chapter numbering stays aligned with the riscv-v-spec chapters.
- vdotu.vv
- vdot.vv
TODO
- vfdotu.vv
TODO